LICENSING NOTICE¶
All users of VitalDB, an open biosignal dataset, must agree to the Data Use Agreement below. If you do not agree, please close this window. The Data Use Agreement is available here: https://vitaldb.net/dataset/#h.vcpgs1yemdb5
This is the development version of the project code¶
For the Project Draft submission see the DL4H_Team_24_Project_Draft.ipynb notebook in the project repository.
Project repository¶
The project repository can be found at: https://github.com/abarrie2/cs598-dlh-project
Introduction¶
This project aims to reproduce findings from the paper titled "Predicting intraoperative hypotension using deep learning with waveforms of arterial blood pressure, electroencephalogram, and electrocardiogram: Retrospective study" by Jo Y-Y et al. (2022) [1]. This study introduces a deep learning model that predicts intraoperative hypotension (IOH) events before they occur, utilizing a combination of arterial blood pressure (ABP), electroencephalogram (EEG), and electrocardiogram (ECG) signals.
Background of the Problem¶
Intraoperative hypotension (IOH) is a common and significant surgical complication defined by a mean arterial pressure drop below 65 mmHg. It is associated with increased risks of myocardial infarction, acute kidney injury, and heightened postoperative mortality. Effective prediction and timely intervention can substantially enhance patient outcomes.
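The definition above can be made concrete with a short sketch. This is illustrative only: the MAP series, the 2 Hz sampling rate, and the one-minute persistence requirement are assumptions for the example, not values taken from the project code.

```python
import numpy as np

def find_ioh(map_series, fs=2, threshold=65.0, min_duration_s=60):
    """Return the start index of the first run of MAP samples below
    `threshold` lasting at least `min_duration_s` seconds, else None."""
    run = 0
    for i, below in enumerate(map_series < threshold):
        run = run + 1 if below else 0
        if run == fs * min_duration_s:
            return i - run + 1
    return None

# 150 s at 80 mmHg, 65 s at 60 mmHg, 150 s at 80 mmHg (2 Hz samples)
map_series = np.concatenate([np.full(300, 80.0), np.full(130, 60.0), np.full(300, 80.0)])
print(find_ioh(map_series))  # → 300
```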
Evolution of IOH Prediction¶
Initial attempts to predict IOH primarily used arterial blood pressure (ABP) waveforms. A foundational study by Hatib F et al. (2018) titled "Machine-learning Algorithm to Predict Hypotension Based on High-fidelity Arterial Pressure Waveform Analysis" [2] showed that machine learning could forecast IOH events using ABP with reasonable accuracy. This finding spurred further research into utilizing various physiological signals for IOH prediction.
Subsequent advancements included the development of the Acumen™ hypotension prediction index, which was studied in "Acumen™ hypotension prediction index guidance for prevention and treatment of hypotension in noncardiac surgery: a prospective, single-arm, multicenter trial" by Bao X et al. (2024) [3]. This trial integrated a hypotension prediction index into blood pressure monitoring equipment, demonstrating its effectiveness in reducing the number and duration of IOH events during surgeries. Further study is needed to determine whether this reduction in IOH events translates into improved postoperative patient outcomes.
Current Study¶
Building on these advancements, the paper by Jo Y-Y et al. (2022) proposes a deep learning approach that enhances prediction accuracy by incorporating EEG and ECG signals along with ABP. This multi-modal method, evaluated over prediction windows of 3, 5, 10, and 15 minutes, aims to provide a comprehensive physiological profile that could predict IOH more accurately and earlier. Their results indicate that the combination of ABP and EEG significantly improves performance metrics such as AUROC and AUPRC, outperforming models that use fewer signals or different combinations.
Our project seeks to reproduce and verify Jo Y-Y et al.'s results to assess whether this integrated approach can indeed improve IOH prediction accuracy, thereby potentially enhancing surgical safety and patient outcomes.
Scope of Reproducibility:¶
The original paper investigated the following hypotheses:
- Hypothesis 1: A model using ABP and ECG will outperform a model using ABP alone in predicting IOH.
- Hypothesis 2: A model using ABP and EEG will outperform a model using ABP alone in predicting IOH.
- Hypothesis 3: A model using ABP, EEG, and ECG will outperform a model using ABP alone in predicting IOH.
Results were compared using AUROC and AUPRC scores. Based on the results described in the original paper, we expect that Hypothesis 2 will be confirmed, and that Hypotheses 1 and 3 will not be confirmed.
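As a sketch of how the two comparison metrics behave, the following uses scikit-learn on hypothetical scores (not project data): a more informative scorer should win on both AUROC and AUPRC.

```python
# Hypothetical comparison of two score sets by AUROC and AUPRC.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)        # 0 = non-event, 1 = IOH event
scores_a = 0.3 * y_true + rng.random(1000)    # informative scorer: positives shifted up
scores_b = rng.random(1000)                   # uninformative scorer

for name, scores in [("A", scores_a), ("B", scores_b)]:
    print(f"model {name}: AUROC={roc_auc_score(y_true, scores):.3f}, "
          f"AUPRC={average_precision_score(y_true, scores):.3f}")
```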
In order to perform the corresponding experiments, we will implement a CNN-based model that can be configured to train and infer using the following four model variations:
- ABP data alone
- ABP and ECG data
- ABP and EEG data
- ABP, ECG, and EEG data
We will measure the performance of these configurations using the same AUROC and AUPRC metrics as the original paper. To test Hypothesis 1 we will compare AUROC and AUPRC between model variations 1 and 2; to test Hypothesis 2, between variations 1 and 3; and to test Hypothesis 3, between variations 1 and 4. For each of these comparisons, we will run separate experiments with the following time-to-IOH prediction windows:
- 3 minutes before event
- 5 minutes before event
- 10 minutes before event
- 15 minutes before event
In the event that we are compute-bound, we will prioritize the 3-minute prediction window experiments as they are the most relevant to the original paper's findings.
The predictive power of the ABP, ECG, and ABP + ECG models will be compared at the 3-, 5-, 10-, and 15-minute prediction windows.
Modifications made for demo mode¶
In order to demonstrate the functioning of the code within a short time (i.e., the <8 minute limit), the following options and modifications were used:
- `MAX_CASES` was set to 20. The total number of cases used in the full training set is 3296, but the smaller number allows demonstration of each section of the pipeline.
- `vitaldb_cache` is prepopulated in Google Colab. The cache file is approx. 800MB, contains the raw and minified copies of the source dataset, and is downloaded from Google Drive. This is much faster than using the `vitaldb` API, but again covers only a fraction of the data. The full dataset can be downloaded with the API or prepopulated by following the instructions in the "Bulk Data Download" section below.
- `max_epochs` is set to 6. With the small dataset, training is fast and shows the decreasing training and validation losses. In the full model run, `max_epochs` will be set to 100. In both cases early stopping is enabled and will stop training if the validation loss stops decreasing for five consecutive epochs.
- Only the "ABP + EEG" combination will be run. In the final report, additional combinations will be run, as discussed later.
- Only the 3-minute prediction window will be run. In the final report, additional prediction windows (5, 10 and 15 minutes) will be run, as discussed later.
- No ablations are run in the demo. These will be completed for the final report.
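The early-stopping rule mentioned above (stop once the validation loss has not improved for five consecutive epochs) can be sketched as follows; the loss values are hypothetical.

```python
def should_stop(val_losses, patience=5):
    """True once the last `patience` validation losses all fail to improve
    on the best loss seen before them."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return all(loss >= best_before for loss in val_losses[-patience:])

# Hypothetical run: improvement stalls after epoch 3 (best loss 0.6).
losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.63, 0.64, 0.65]
print(should_stop(losses))  # → True
```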
Methodology¶
Methodology from Final Rubric¶
- Environment
- Python version
- Dependencies/packages needed
- Data
- Data download instruction
- Data descriptions with helpful charts and visualizations
- Preprocessing code + command
- Model
- Citation to the original paper
- Link to the original paper’s repo (if applicable)
- Model descriptions
- Implementation code
- Pretrained model (if applicable)
- Training
- Hyperparams
- Report at least 3 types of hyperparameters such as learning rate, batch size, hidden size, dropout
- Computational requirements
- Report at least 3 types of requirements such as type of hardware, average runtime for each epoch, total number of trials, GPU hrs used, # training epochs
- Training code
- Hyperparams
- Evaluation
- Metrics descriptions
- Evaluation code
The methodology section is composed of the following subsections: Environment, Data and Model.
- Environment: This section describes the setup of the environment, including the installation of necessary libraries and the configuration of the runtime environment.
- Data: This section describes the dataset used in the study, including its collection and preprocessing.
- Data Collection: This section describes the process of downloading the dataset from VitalDB and populating the local data cache.
- Data Preprocessing: This section describes the preprocessing steps applied to the dataset, including data selection, data cleaning, and feature extraction.
- Model: This section describes the deep learning model used in the study, including its implementation, training, and evaluation.
- Model Implementation: This section describes the implementation of the deep learning model, including the architecture, loss function, and optimization algorithm.
- Model Training: This section describes the training process, including the training loop, hyperparameters, and training strategy.
- Model Evaluation: This section describes the evaluation process, including the metrics used, the evaluation strategy, and the results obtained.
Environment¶
Create environment¶
The environment setup differs based on whether you are running the code on a local machine or on Google Colab. The following sections provide instructions for setting up the environment in each case.
Local machine¶
Create conda environment for the project using the environment.yml file:
conda env create --prefix .envs/dlh-team24 -f environment.yml
Activate the environment with:
conda activate .envs/dlh-team24
Google Colab¶
The following code snippet installs the required packages and downloads the necessary files in a Google Colab environment:
# Google Colab environments have a `/content` directory. Use this as a proxy for running Colab-only code
COLAB_ENV = "google.colab" in str(get_ipython())
if COLAB_ENV:
#install vitaldb
%pip install vitaldb
# Executing in Colab therefore download cached preprocessed data.
# TODO: Integrate this with the setup local cache data section below.
# Check for file existence before overwriting.
import gdown
gdown.download(id="15b5Nfhgj3McSO2GmkVUKkhSSxQXX14hJ", output="vitaldb_cache.tgz")
!tar -zxf vitaldb_cache.tgz
# Download sqi_filter.csv from github repo
!wget https://raw.githubusercontent.com/abarrie2/cs598-dlh-project/main/sqi_filter.csv
All other required packages are already installed in the Google Colab environment.
Load environment¶
# Import packages
import os
import random
import copy
from collections import defaultdict
from timeit import default_timer as timer
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, roc_auc_score, precision_recall_curve, auc, confusion_matrix
from sklearn.metrics import RocCurveDisplay, PrecisionRecallDisplay, average_precision_score
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
import torch
from torch.utils.data import Dataset
import vitaldb
import h5py
import torch.nn as nn
import torch.nn.functional as F
from tqdm import tqdm
from datetime import datetime
Set random seeds to generate consistent results:
RANDOM_SEED = 42
random.seed(RANDOM_SEED)
np.random.seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)
if torch.cuda.is_available():
torch.cuda.manual_seed(RANDOM_SEED)
torch.cuda.manual_seed_all(RANDOM_SEED)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
os.environ["PYTHONHASHSEED"] = str(RANDOM_SEED)
Set device to GPU or MPS if available
device = torch.device("cuda" if torch.cuda.is_available() else "mps" if (torch.backends.mps.is_available() and torch.backends.mps.is_built()) else "cpu")
print(f"Using device: {device}")
Using device: mps
Data¶
Data Description¶
Source¶
Data for this project is sourced from the open biosignal VitalDB dataset as described in "VitalDB, a high-fidelity multi-parameter vital signs database in surgical patients" by Lee H-C et al. (2022) [4], which contains perioperative vital signs and numerical data from 6,388 cases of non-cardiac (general, thoracic, urological, and gynecological) surgery patients who underwent routine or emergency surgery at Seoul National University Hospital between 2016 and 2017. The dataset includes ABP, ECG, and EEG signals, as well as other physiological data. The dataset is available through an API and Python library, and at PhysioNet: https://physionet.org/content/vitaldb/1.0.0/
Statistics¶
Characteristics of the dataset:
| Characteristic | Value | Details |
|---|---|---|
| Total number of cases | 6,388 | |
| Sex (male) | 3,243 (50.8%) | |
| Age (years) | 59 | Range: 48-68 |
| Height (cm) | 162 | Range: 156-169 |
| Weight (kg) | 61 | Range: 53-69 |
| Tram-Rac 4A tracks | 6,355 (99.5%) | Sampling rate: 500Hz |
| BIS Vista tracks | 5,566 (87.1%) | Sampling rate: 128Hz |
| Case duration (min) | 189 | Range: 27-1041 |
Labels are only known after processing the data. In the original paper, there were an average of 1.6 IOH events per case and 5.7 non-events per case, so we expect approximately 10,221 IOH events and 36,412 non-events in the dataset.
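As a back-of-envelope check of these expected label counts, using the per-case averages reported by the original paper:

```python
# Per-case averages reported by the original paper.
n_cases = 6388
events_per_case, non_events_per_case = 1.6, 5.7

n_events = round(n_cases * events_per_case)
n_non_events = round(n_cases * non_events_per_case)
print(n_events, n_non_events)  # → 10221 36412
print(f"positive rate: {n_events / (n_events + n_non_events):.1%}")
```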
Data Processing¶
Data will be processed as follows:
- Load the dataset from VitalDB, or from a local cache if previously downloaded.
- Apply the inclusion and exclusion selection criteria to filter the dataset according to surgery metadata.
- Generate a minified dataset by discarding all tracks except ABP, ECG, and EEG.
- Preprocess the data by applying band-pass and z-score normalization to the ECG and EEG signals, and filtering out ABP signals below a Signal Quality Index (SQI) threshold.
- Generate event and non-event samples by extracting 60-second segments around IOH events and non-events.
- Split the dataset into training, validation, and test sets with a 6:1:3 ratio, ensuring that samples from a single case are not split across different sets to avoid data leakage.
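The case-level split in the last step can be sketched as follows; `case_ids` is a hypothetical stand-in for the selected VitalDB case ids, and splitting by case id (rather than by segment) keeps all segments from one surgery in the same set.

```python
import numpy as np

def split_cases(case_ids, ratios=(0.6, 0.1, 0.3), seed=42):
    """Shuffle case ids and split them 6:1:3, so every segment from a
    given surgical case lands in exactly one of train/val/test."""
    rng = np.random.default_rng(seed)
    ids = np.array(sorted(case_ids))
    rng.shuffle(ids)
    n_train = int(len(ids) * ratios[0])
    n_val = int(len(ids) * ratios[1])
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train_ids, val_ids, test_ids = split_cases(range(100))
print(len(train_ids), len(val_ids), len(test_ids))  # → 60 10 30
```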
Set Up Local Data Caches¶
VitalDB data is static, so local copies can be stored and reused to avoid expensive downloads and to speed up data processing.
The default directory defined below is in the project .gitignore file. If this is modified, the new directory should also be added to the project .gitignore.
VITALDB_CACHE = './vitaldb_cache'
VITAL_ALL = f"{VITALDB_CACHE}/vital_all"
VITAL_MINI = f"{VITALDB_CACHE}/vital_mini"
VITAL_METADATA = f"{VITALDB_CACHE}/metadata"
VITAL_MODELS = f"{VITALDB_CACHE}/models"
VITAL_PREPROCESS_SCRATCH = f"{VITALDB_CACHE}/data_scratch"
VITAL_EXTRACTED_SEGMENTS = f"{VITALDB_CACHE}/segments"
TRACK_CACHE = None
SEGMENT_CACHE = None
# when USE_MEMORY_CACHING is enabled, track data will be persisted in an in-memory cache. Not useful once we have already pre-extracted all event segments
# DON'T USE: Stores items in memory that are later not used. Causes OOM on segment extraction.
USE_MEMORY_CACHING = False
# When RESET_CACHE is set to True, it will ensure the TRACK_CACHE is disposed and recreated when we do dataset initialization.
# Use as a shortcut to wiping cache rather than restarting kernel
RESET_CACHE = False
PREDICTION_WINDOW = 3
#PREDICTION_WINDOW = 'ALL'
ALL_PREDICTION_WINDOWS = [3, 5, 10, 15]
# Maximum number of cases of interest for which to download data.
# Set to a small value (ex: 20) for demo purposes, or to None to download and process all cases.
MAX_CASES = None
#MAX_CASES = 300
# Preloading Cases: when true, all matched cases will have the _mini tracks extracted and put into in-mem dict
PRELOADING_CASES = False
PRELOADING_SEGMENTS = True
# Perform Data Preprocessing: do we want to take the raw vital file and extract segments of interest for training?
PERFORM_DATA_PREPROCESSING = False
if not os.path.exists(VITALDB_CACHE):
os.mkdir(VITALDB_CACHE)
if not os.path.exists(VITAL_ALL):
os.mkdir(VITAL_ALL)
if not os.path.exists(VITAL_MINI):
os.mkdir(VITAL_MINI)
if not os.path.exists(VITAL_METADATA):
os.mkdir(VITAL_METADATA)
if not os.path.exists(VITAL_MODELS):
os.mkdir(VITAL_MODELS)
if not os.path.exists(VITAL_PREPROCESS_SCRATCH):
os.mkdir(VITAL_PREPROCESS_SCRATCH)
if not os.path.exists(VITAL_EXTRACTED_SEGMENTS):
os.mkdir(VITAL_EXTRACTED_SEGMENTS)
print(os.listdir(VITALDB_CACHE))
['.DS_Store', 'vital_all', 'physionet', 'models', 'data_scratch', 'vital_mini', 'metadata', 'segments']
Bulk Data Download¶
This step is not required, but will significantly speed up downstream processing and avoid a high volume of API requests to the VitalDB web site.
The cache population code checks whether the .vital files are available locally; the cache can be populated by calling the vitaldb API or by manually prepopulating it (recommended):
- Manually download the dataset from the following site: https://physionet.org/content/vitaldb/1.0.0/
  - Download the zip file in a browser, or
  - Use `wget -r -N -c -np https://physionet.org/files/vitaldb/1.0.0/` to download the files in a terminal
- Move the contents of `vital_files` into the `${VITAL_ALL}` directory.
# Returns the Pandas DataFrame for the specified dataset.
# One of 'cases', 'labs', or 'trks'
# If the file exists locally, create and return the DataFrame.
# Else, download and cache the csv first, then return the DataFrame.
def vitaldb_dataframe_loader(dataset_name):
if dataset_name not in ['cases', 'labs', 'trks']:
raise ValueError(f'Invalid dataset name: {dataset_name}')
file_path = f'{VITAL_METADATA}/{dataset_name}.csv'
if os.path.isfile(file_path):
print(f'{dataset_name}.csv exists locally.')
df = pd.read_csv(file_path)
return df
else:
print(f'downloading {dataset_name} and storing in the local cache for future reuse.')
df = pd.read_csv(f'https://api.vitaldb.net/{dataset_name}')
df.to_csv(file_path, index=False)
return df
cases = vitaldb_dataframe_loader('cases')
cases = cases.set_index('caseid')
cases.shape
cases.csv exists locally.
(6388, 73)
cases.index.nunique()
6388
cases.head()
| subjectid | casestart | caseend | anestart | aneend | opstart | opend | adm | dis | icu_days | ... | intraop_colloid | intraop_ppf | intraop_mdz | intraop_ftn | intraop_rocu | intraop_vecu | intraop_eph | intraop_phe | intraop_epi | intraop_ca | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| caseid | |||||||||||||||||||||
| 1 | 5955 | 0 | 11542 | -552 | 10848.0 | 1668 | 10368 | -236220 | 627780 | 0 | ... | 0 | 120 | 0.0 | 100 | 70 | 0 | 10 | 0 | 0 | 0 |
| 2 | 2487 | 0 | 15741 | -1039 | 14921.0 | 1721 | 14621 | -221160 | 1506840 | 0 | ... | 0 | 150 | 0.0 | 0 | 100 | 0 | 20 | 0 | 0 | 0 |
| 3 | 2861 | 0 | 4394 | -590 | 4210.0 | 1090 | 3010 | -218640 | 40560 | 0 | ... | 0 | 0 | 0.0 | 0 | 50 | 0 | 0 | 0 | 0 | 0 |
| 4 | 1903 | 0 | 20990 | -778 | 20222.0 | 2522 | 17822 | -201120 | 576480 | 1 | ... | 0 | 80 | 0.0 | 100 | 100 | 0 | 50 | 0 | 0 | 0 |
| 5 | 4416 | 0 | 21531 | -1009 | 22391.0 | 2591 | 20291 | -67560 | 3734040 | 13 | ... | 0 | 0 | 0.0 | 0 | 160 | 0 | 10 | 900 | 0 | 2100 |
5 rows × 73 columns
cases['sex'].value_counts()
sex M 3243 F 3145 Name: count, dtype: int64
Tracks¶
trks = vitaldb_dataframe_loader('trks')
trks = trks.set_index('caseid')
trks.shape
trks.csv exists locally.
(486449, 2)
trks.index.nunique()
6388
trks.groupby('caseid')[['tid']].count().plot();
trks.groupby('caseid')[['tid']].count().hist();
trks.groupby('tname').count().sort_values(by='tid', ascending=False)
| tid | |
|---|---|
| tname | |
| Solar8000/HR | 6387 |
| Solar8000/PLETH_SPO2 | 6386 |
| Solar8000/PLETH_HR | 6386 |
| Primus/CO2 | 6362 |
| Primus/PAMB_MBAR | 6361 |
| ... | ... |
| Orchestra/AMD_VOL | 1 |
| Solar8000/ST_V5 | 1 |
| Orchestra/NPS_VOL | 1 |
| Orchestra/AMD_RATE | 1 |
| Orchestra/VEC_VOL | 1 |
196 rows × 1 columns
Parameters of Interest¶
Hemodynamic Parameters Reference¶
SNUADC/ART
arterial blood pressure waveform
Parameter, Description, Type/Hz, Unit
SNUADC/ART, Arterial pressure wave, W/500, mmHg
trks[trks['tname'].str.contains('SNUADC/ART')].shape
(3645, 2)
SNUADC/ECG_II
electrocardiogram waveform
Parameter, Description, Type/Hz, Unit
SNUADC/ECG_II, ECG lead II wave, W/500, mV
trks[trks['tname'].str.contains('SNUADC/ECG_II')].shape
(6355, 2)
BIS/EEG1_WAV
electroencephalogram waveform
Parameter, Description, Type/Hz, Unit
BIS/EEG1_WAV, EEG wave from channel 1, W/128, uV
trks[trks['tname'].str.contains('BIS/EEG1_WAV')].shape
(5871, 2)
Cases of Interest¶
These are the subset of case ids for which modelling and analysis will be performed based upon inclusion criteria and waveform data availability.
# TRACK NAMES is used for metadata analysis via API
TRACK_NAMES = ['SNUADC/ART', 'SNUADC/ECG_II', 'BIS/EEG1_WAV']
TRACK_SRATES = [500, 500, 128]
# EXTRACTION TRACK NAMES adds the EVENT track which is only used when doing actual file i/o
EXTRACTION_TRACK_NAMES = ['SNUADC/ART', 'SNUADC/ECG_II', 'BIS/EEG1_WAV', 'EVENT']
EXTRACTION_TRACK_SRATES = [500, 500, 128, 1]
# As in the paper, select cases which meet the following criteria:
#
# For patients, the inclusion criteria were as follows:
# (1) adults (age >= 18)
# (2) administered general anaesthesia
# (3) undergone non-cardiac surgery.
#
# For waveform data, the inclusion criteria were as follows:
# (1) no missing monitoring for ABP, ECG, and EEG waveforms
# (2) no cases containing false events or non-events due to poor signal quality
# (checked in second stage of data preprocessing)
# Adult
inclusion_1 = cases.loc[cases['age'] >= 18].index
print(f'{len(cases)-len(inclusion_1)} cases excluded, {len(inclusion_1)} remaining due to age criteria')
# General Anesthesia
inclusion_2 = cases.loc[cases['ane_type'] == 'General'].index
print(f'{len(cases)-len(inclusion_2)} cases excluded, {len(inclusion_2)} remaining due to anesthesia criteria')
# Non-cardiac surgery
inclusion_3 = cases.loc[
~cases['opname'].str.contains("cardiac", case=False)
& ~cases['opname'].str.contains("aneurysmal", case=False)
].index
print(f'{len(cases)-len(inclusion_3)} cases excluded, {len(inclusion_3)} remaining due to non-cardiac surgery criteria')
# ABP, ECG, EEG waveforms
inclusion_4 = trks.loc[trks['tname'].isin(TRACK_NAMES)].index.value_counts()
inclusion_4 = inclusion_4[inclusion_4 == len(TRACK_NAMES)].index
print(f'{len(cases)-len(inclusion_4)} cases excluded, {len(inclusion_4)} remaining due to missing waveform data')
# SQI filter
# NOTE: this depends on a sqi_filter.csv generated by external processing
inclusion_5 = pd.read_csv('sqi_filter.csv', header=None, names=['caseid','sqi']).set_index('caseid').index
print(f'{len(cases)-len(inclusion_5)} cases excluded, {len(inclusion_5)} remaining due to SQI threshold not being met')
# Only include cases with known good waveforms.
exclusion_6 = pd.read_csv('malformed_tracks_filter.csv', header=None, names=['caseid']).set_index('caseid').index
inclusion_6 = cases.index.difference(exclusion_6)
print(f'{len(cases)-len(inclusion_6)} cases excluded, {len(inclusion_6)} remaining due to malformed waveforms')
cases_of_interest_idx = inclusion_1 \
.intersection(inclusion_2) \
.intersection(inclusion_3) \
.intersection(inclusion_4) \
.intersection(inclusion_5) \
.intersection(inclusion_6)
cases_of_interest = cases.loc[cases_of_interest_idx]
print()
print(f'{cases_of_interest_idx.shape[0]} out of {cases.shape[0]} total cases remaining after exclusions applied')
# Trim cases of interest to MAX_CASES
if MAX_CASES:
cases_of_interest_idx = cases_of_interest_idx[:MAX_CASES]
print(f'{cases_of_interest_idx.shape[0]} cases of interest selected')
57 cases excluded, 6331 remaining due to age criteria 345 cases excluded, 6043 remaining due to anesthesia criteria 14 cases excluded, 6374 remaining due to non-cardiac surgery criteria 3019 cases excluded, 3369 remaining due to missing waveform data 0 cases excluded, 6388 remaining due to SQI threshold not being met 186 cases excluded, 6202 remaining due to malformed waveforms 3110 out of 6388 total cases remaining after exclusions applied 3110 cases of interest selected
cases_of_interest.head(n=5)
| subjectid | casestart | caseend | anestart | aneend | opstart | opend | adm | dis | icu_days | ... | intraop_colloid | intraop_ppf | intraop_mdz | intraop_ftn | intraop_rocu | intraop_vecu | intraop_eph | intraop_phe | intraop_epi | intraop_ca | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| caseid | |||||||||||||||||||||
| 1 | 5955 | 0 | 11542 | -552 | 10848.0 | 1668 | 10368 | -236220 | 627780 | 0 | ... | 0 | 120 | 0.0 | 100 | 70 | 0 | 10 | 0 | 0 | 0 |
| 4 | 1903 | 0 | 20990 | -778 | 20222.0 | 2522 | 17822 | -201120 | 576480 | 1 | ... | 0 | 80 | 0.0 | 100 | 100 | 0 | 50 | 0 | 0 | 0 |
| 7 | 5124 | 0 | 15770 | 477 | 14817.0 | 3177 | 14577 | -154320 | 623280 | 3 | ... | 0 | 0 | 0.0 | 0 | 120 | 0 | 0 | 0 | 0 | 0 |
| 10 | 2175 | 0 | 20992 | -1743 | 21057.0 | 2457 | 19857 | -220740 | 3580860 | 1 | ... | 0 | 90 | 0.0 | 0 | 110 | 0 | 20 | 500 | 0 | 600 |
| 12 | 491 | 0 | 31203 | -220 | 31460.0 | 5360 | 30860 | -208500 | 1519500 | 4 | ... | 200 | 100 | 0.0 | 100 | 70 | 0 | 20 | 0 | 0 | 3300 |
5 rows × 73 columns
Tracks of Interest¶
These are the subset of tracks (waveforms) for the cases of interest identified above.
# A single case maps to one or more waveform tracks. Select only the tracks required for analysis.
trks_of_interest = trks.loc[cases_of_interest_idx][trks.loc[cases_of_interest_idx]['tname'].isin(TRACK_NAMES)]
trks_of_interest.shape
(9330, 2)
trks_of_interest.head(n=5)
| tname | tid | |
|---|---|---|
| caseid | ||
| 1 | BIS/EEG1_WAV | 0aa685df768489a18a5e9f53af0d83bf60890c73 |
| 1 | SNUADC/ART | 724cdd7184d7886b8f7de091c5b135bd01949959 |
| 1 | SNUADC/ECG_II | 8c9161aaae8cb578e2aa7b60f44234d98d2b3344 |
| 4 | BIS/EEG1_WAV | 1b4c2379be3397a79d3787dd810190150dc53f27 |
| 4 | SNUADC/ART | e28777c4706fe3a5e714bf2d91821d22d782d802 |
trks_of_interest_idx = trks_of_interest.set_index('tid').index
trks_of_interest_idx.shape
(9330,)
Build Tracks Cache for Local Processing¶
Tracks data are large and therefore expensive to download each time they are used.
By default, the .vital file format stores all tracks for each case internally. Since only select tracks per case are required, each .vital file can be further reduced by discarding the unused tracks.
# Ensure the full vital file dataset is available for cases of interest.
count_downloaded = 0
count_present = 0
#for i, idx in enumerate(cases.index):
for idx in cases_of_interest_idx:
full_path = f'{VITAL_ALL}/{idx:04d}.vital'
if not os.path.isfile(full_path):
print(f'Missing vital file: {full_path}')
# Download and save the file.
vf = vitaldb.VitalFile(idx)
vf.to_vital(full_path)
count_downloaded += 1
else:
count_present += 1
print()
print(f'Count of cases of interest: {cases_of_interest_idx.shape[0]}')
print(f'Count of vital files downloaded: {count_downloaded}')
print(f'Count of vital files already present: {count_present}')
Count of cases of interest: 3110 Count of vital files downloaded: 0 Count of vital files already present: 3110
# Convert vital files to "mini" versions including only the subset of tracks defined in TRACK_NAMES above.
# Only perform conversion for the cases of interest.
# NOTE: If this cell is interrupted, it can be restarted and will continue where it left off.
count_minified = 0
count_present = 0
count_missing_tracks = 0
count_not_fixable = 0
vf = vitaldb.VitalFile('./vitaldb_cache/vital_all/0001.vital', EXTRACTION_TRACK_NAMES)
print(vf)
# If set to true, local mini files are checked for all tracks even if already present.
FORCE_VALIDATE = False
for idx in cases_of_interest_idx:
full_path = f'{VITAL_ALL}/{idx:04d}.vital'
mini_path = f'{VITAL_MINI}/{idx:04d}_mini.vital'
if FORCE_VALIDATE or not os.path.isfile(mini_path):
print(f'Creating mini vital file: {idx}')
vf = vitaldb.VitalFile(full_path, EXTRACTION_TRACK_NAMES)
if len(vf.get_track_names()) != 4:
print(f'Missing track in vital file: {idx}, {set(EXTRACTION_TRACK_NAMES).difference(set(vf.get_track_names()))}')
count_missing_tracks += 1
# Attempt to download from VitalDB directly and see if missing tracks are present.
vf = vitaldb.VitalFile(idx, EXTRACTION_TRACK_NAMES)
if len(vf.get_track_names()) != 3:
print(f'Unable to fix missing tracks: {idx}')
count_not_fixable += 1
continue
if vf.get_track_samples(EXTRACTION_TRACK_NAMES[0], 1/EXTRACTION_TRACK_SRATES[0]).shape[0] == 0:
print(f'Empty track: {idx}, {EXTRACTION_TRACK_NAMES[0]}')
count_not_fixable += 1
continue
if vf.get_track_samples(EXTRACTION_TRACK_NAMES[1], 1/EXTRACTION_TRACK_SRATES[1]).shape[0] == 0:
print(f'Empty track: {idx}, {EXTRACTION_TRACK_NAMES[1]}')
count_not_fixable += 1
continue
if vf.get_track_samples(EXTRACTION_TRACK_NAMES[2], 1/EXTRACTION_TRACK_SRATES[2]).shape[0] == 0:
print(f'Empty track: {idx}, {EXTRACTION_TRACK_NAMES[2]}')
count_not_fixable += 1
continue
# if vf.get_track_samples(EXTRACTION_TRACK_NAMES[3], 1/EXTRACTION_TRACK_SRATES[3]).shape[0] == 0:
# print(f'Empty track: {idx}, {EXTRACTION_TRACK_NAMES[3]}')
# count_not_fixable += 1
# continue
vf.to_vital(mini_path)
count_minified += 1
else:
count_present += 1
print()
print(f'Count of cases of interest: {cases_of_interest_idx.shape[0]}')
print(f'Count of vital files minified: {count_minified}')
print(f'Count of vital files already present: {count_present}')
print(f'Count of vital files missing tracks: {count_missing_tracks}')
print(f'Count of vital files not fixable: {count_not_fixable}')
VitalFile('./vitaldb_cache/vital_all/0001.vital', '['EVENT', 'SNUADC/ART', 'SNUADC/ECG_II', 'BIS/EEG1_WAV']')
Count of cases of interest: 3110
Count of vital files minified: 0
Count of vital files already present: 3110
Count of vital files missing tracks: 0
Count of vital files not fixable: 0
Validate Mini Files¶
# Validate that the "mini" vital files for the cases of interest contain all expected tracks.
# NOTE: Only runs when FORCE_VALIDATE below is set to True.
count_missing_tracks = 0
# If true, perform fast validate that all mini files have 3 tracks.
FORCE_VALIDATE = False
if FORCE_VALIDATE:
for idx in cases_of_interest_idx:
mini_path = f'{VITAL_MINI}/{idx:04d}_mini.vital'
if os.path.isfile(mini_path):
vf = vitaldb.VitalFile(mini_path)
if len(vf.get_track_names()) != 3:
print(f'Missing track in vital file: {idx}, {set(TRACK_NAMES).difference(set(vf.get_track_names()))}')
count_missing_tracks += 1
print()
print(f'Count of cases of interest: {cases_of_interest_idx.shape[0]}')
print(f'Count of vital files missing tracks: {count_missing_tracks}')
Count of cases of interest: 3110 Count of vital files missing tracks: 0
Filtering¶
Preprocessing characteristics are different for each of the three signal categories:
- ABP: no preprocessing, use as-is
- ECG: apply a 1-40Hz bandpass filter, then perform Z-score normalization
- EEG: apply a 0.5-50Hz bandpass filter
apply_bandpass_filter() implements the bandpass filter using scipy.signal
apply_zscore_normalization() implements the Z-score normalization using numpy
from scipy.signal import butter, lfilter, spectrogram
# define two methods for data preprocessing
def apply_bandpass_filter(data, lowcut, highcut, fs, order=5):
b, a = butter(order, [lowcut, highcut], fs=fs, btype='band')
y = lfilter(b, a, np.nan_to_num(data))
return y
def apply_zscore_normalization(signal):
mean = np.nanmean(signal)
std = np.nanstd(signal)
return (signal - mean) / std
# Filtering Demonstration
# Experimental: this code will be incorporated into the overall preloader process.
# For now it plots example before/after filtered signal data.
caseidx = 1
file_path = f"{VITAL_MINI}/{caseidx:04d}_mini.vital"
vf = vitaldb.VitalFile(file_path, TRACK_NAMES)
originalAbp = None
filteredAbp = None
originalEcg = None
filteredEcg = None
originalEeg = None
filteredEeg = None
ABP_TRACK_NAME = "SNUADC/ART"
ECG_TRACK_NAME = "SNUADC/ECG_II"
EEG_TRACK_NAME = "BIS/EEG1_WAV"
for i, (track_name, rate) in enumerate(zip(TRACK_NAMES, TRACK_SRATES)):
# Get samples for this track
track_samples = vf.get_track_samples(track_name, 1/rate)
#track_samples, _ = vf.get_samples(track_name, 1/rate)
print(f"Track {track_name} @ {rate}Hz shape {len(track_samples)}")
if track_name == ABP_TRACK_NAME:
# ABP waveforms are used without further pre-processing
originalAbp = track_samples
filteredAbp = track_samples
elif track_name == ECG_TRACK_NAME:
originalEcg = track_samples
# ECG waveforms are band-pass filtered between 1 and 40 Hz, and Z-score normalized
# first apply bandpass filter
filteredEcg = apply_bandpass_filter(track_samples, 1, 40, rate)
# then do z-score normalization
filteredEcg = apply_zscore_normalization(filteredEcg)
elif track_name == EEG_TRACK_NAME:
# EEG waveforms are band-pass filtered between 0.5 and 50 Hz
originalEeg = track_samples
filteredEeg = apply_bandpass_filter(track_samples, 0.5, 50, rate, 2)
def plotSignal(data, title):
plt.figure(figsize=(20, 5))
plt.plot(data)
plt.title(title)
plt.show()
plotSignal(originalAbp, "Original ABP")
plotSignal(filteredAbp, "Filtered ABP")
plotSignal(originalEcg, "Original ECG")
plotSignal(filteredEcg, "Filtered ECG")
plotSignal(originalEeg, "Original EEG")
plotSignal(filteredEeg, "Filtered EEG")
Track SNUADC/ART @ 500Hz shape 5770575 Track SNUADC/ECG_II @ 500Hz shape 5770575 Track BIS/EEG1_WAV @ 128Hz shape 1477268
# Preprocess data tracks
ABP_TRACK_NAME = "SNUADC/ART"
ECG_TRACK_NAME = "SNUADC/ECG_II"
EEG_TRACK_NAME = "BIS/EEG1_WAV"
EVENT_TRACK_NAME = "EVENT"
MINI_FILE_FOLDER = VITAL_MINI
CACHE_FILE_FOLDER = VITAL_PREPROCESS_SCRATCH
if RESET_CACHE:
TRACK_CACHE = None
SEGMENT_CACHE = None
if TRACK_CACHE is None:
TRACK_CACHE = {}
SEGMENT_CACHE = {}
def get_track_data(case, print_when_file_loaded = False):
parsedFile = None
abp = None
eeg = None
ecg = None
events = None
for i, (track_name, rate) in enumerate(zip(EXTRACTION_TRACK_NAMES, EXTRACTION_TRACK_SRATES)):
# use integer case id and track name, delimited by pipe, as cache key
cache_label = f"{case}|{track_name}"
if cache_label not in TRACK_CACHE:
if parsedFile is None:
file_path = f"{MINI_FILE_FOLDER}/{case:04d}_mini.vital"
if print_when_file_loaded:
print(f"[{datetime.now()}] Loading vital file {file_path}")
parsedFile = vitaldb.VitalFile(file_path, EXTRACTION_TRACK_NAMES)
dataset = np.array(parsedFile.get_track_samples(track_name, 1/rate))
if track_name == ABP_TRACK_NAME:
# no filtering for ABP
abp = dataset
abp = pd.DataFrame(abp).ffill(axis=0).bfill(axis=0)[0].values
if USE_MEMORY_CACHING:
TRACK_CACHE[cache_label] = abp
elif track_name == ECG_TRACK_NAME:
ecg = dataset
# apply ECG filtering: first bandpass then do z-score normalization
ecg = pd.DataFrame(ecg).ffill(axis=0).bfill(axis=0)[0].values
ecg = apply_bandpass_filter(ecg, 1, 40, rate, 2)
ecg = apply_zscore_normalization(ecg)
if USE_MEMORY_CACHING:
TRACK_CACHE[cache_label] = ecg
elif track_name == EEG_TRACK_NAME:
eeg = dataset
eeg = pd.DataFrame(eeg).ffill(axis=0).bfill(axis=0)[0].values
# apply EEG filtering: bandpass only
eeg = apply_bandpass_filter(eeg, 0.5, 50, rate, 2)
if USE_MEMORY_CACHING:
TRACK_CACHE[cache_label] = eeg
elif track_name == EVENT_TRACK_NAME:
events = dataset
if USE_MEMORY_CACHING:
TRACK_CACHE[cache_label] = events
else:
# cache hit, pull from cache
if track_name == ABP_TRACK_NAME:
abp = TRACK_CACHE[cache_label]
elif track_name == ECG_TRACK_NAME:
ecg = TRACK_CACHE[cache_label]
elif track_name == EEG_TRACK_NAME:
eeg = TRACK_CACHE[cache_label]
elif track_name == EVENT_TRACK_NAME:
events = TRACK_CACHE[cache_label]
return (abp, ecg, eeg, events)
# ABP waveforms are used without further pre-processing
# ECG waveforms are band-pass filtered between 1 and 40 Hz, and Z-score normalized
# EEG waveforms are band-pass filtered between 0.5 and 50 Hz
if PRELOADING_CASES:
# determine disk cache file label
maxlabel = "ALL"
if MAX_CASES is not None:
maxlabel = str(MAX_CASES)
picklefile = f"{CACHE_FILE_FOLDER}/{PREDICTION_WINDOW}_minutes_MAX{maxlabel}.trackcache"
for track in tqdm(cases_of_interest_idx):
# getting track data will cause a cache-check and fill when missing
# will also apply appropriate filtering per track
get_track_data(track, False)
print(f"Generated track cache, {len(TRACK_CACHE)} records generated")
def get_segment_data(file_path):
abp = None
eeg = None
ecg = None
if USE_MEMORY_CACHING:
if file_path in SEGMENT_CACHE:
(abp, ecg, eeg) = SEGMENT_CACHE[file_path]
return (abp, ecg, eeg)
try:
with h5py.File(file_path, 'r') as f:
abp = np.array(f['abp'])
ecg = np.array(f['ecg'])
eeg = np.array(f['eeg'])
abp = np.array(abp)
eeg = np.array(eeg)
ecg = np.array(ecg)
if len(abp) > 30000:
abp = abp[:30000]
elif len(abp) < 30000:
abp = np.resize(abp, (30000))
if len(ecg) > 30000:
ecg = ecg[:30000]
elif len(ecg) < 30000:
ecg = np.resize(ecg, (30000))
if len(eeg) > 7680:
eeg = eeg[:7680]
elif len(eeg) < 7680:
eeg = np.resize(eeg, (7680))
if USE_MEMORY_CACHING:
SEGMENT_CACHE[file_path] = (abp, ecg, eeg)
except Exception:
# treat unreadable segment files as missing
abp = None
ecg = None
eeg = None
return (abp, ecg, eeg)
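A caveat worth noting about the length-fixing step in get_segment_data above: np.resize pads an undersized array by cycling its contents, not by zero-filling, so a short segment is repeated until it reaches the target length. A minimal illustration:

```python
import numpy as np

# np.resize pads by repeating the input, not by zero-filling:
short = np.array([1.0, 2.0, 3.0])
padded = np.resize(short, (7,))
print(padded)  # [1. 2. 3. 1. 2. 3. 1.]

# truncation of oversized segments is a plain slice, as in get_segment_data
long_seg = np.arange(10.0)
print(long_seg[:4])  # [0. 1. 2. 3.]
```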
The following method is adapted from the preprocessing block of reference [6] (https://github.com/vitaldb/examples/blob/master/hypotension_art.ipynb)
The approach first finds an intraoperative hypotensive event in the ABP waveform. It then backtracks to an earlier point in the waveform and extracts a 60-second segment representing the waveform feature to use as model input. The figure below shows an example of this approach and is reproduced from the VitalDB example notebook referenced above.

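The backtracking index arithmetic can be sketched in isolation (assumed constants: 500 Hz ABP/ECG, 128 Hz EEG, 60-second input segments; the helper name is illustrative, not from the pipeline). Backing up the prediction window from the event start and taking the preceding minute yields the sample counts seen later in the segment validation output (30000 ABP/ECG samples, 7680 EEG samples).

```python
ABP_ECG_SRATE_HZ = 500
EEG_SRATE_HZ = 128

def predictive_segment_bounds(ioh_event_start_s, pred_window_min):
    """Back up pred_window minutes from the IOH event start, then take
    the preceding 60 seconds as the model-input segment (all in seconds)."""
    seg_end = ioh_event_start_s - pred_window_min * 60
    seg_start = seg_end - 60
    return seg_start, seg_end

# event at 1788 s with a 3-minute prediction window
start_s, end_s = predictive_segment_bounds(1788, 3)
print(start_s, end_s)                        # 1548 1608
print((end_s - start_s) * ABP_ECG_SRATE_HZ)  # 30000 ABP/ECG samples
print((end_s - start_s) * EEG_SRATE_HZ)      # 7680 EEG samples
```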
def getSurgeryBoundariesInSeconds(event, debug=False):
eventIndices = np.argwhere(event==event)
# we are looking for the last index where the event string contains 'started'
lastStart = 0
firstFinish = len(event)-1
# find last start
for idx in eventIndices:
if 'started' in event[idx[0]]:
if debug:
print(event[idx[0]])
print(idx[0])
lastStart = idx[0]
# find first finish
for idx in eventIndices:
if 'finish' in event[idx[0]]:
if debug:
print(event[idx[0]])
print(idx[0])
firstFinish = idx[0]
break
if debug:
print(f'lastStart, firstFinish: {lastStart}, {firstFinish}')
return (lastStart, firstFinish)
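The NaN trick used by getSurgeryBoundariesInSeconds deserves a note: the EVENT track is mostly NaN with occasional annotation strings, and since NaN != NaN, the comparison event == event is True only at the string entries. A small illustration with made-up annotation strings (the exact labels are hypothetical):

```python
import numpy as np

# mostly-NaN event track with a few annotation strings
event = np.array([np.nan, 'Anesthesia started', np.nan, np.nan,
                  'Operation started', np.nan, 'Operation finished'], dtype=object)

# NaN != NaN, so this keeps only the indices holding strings
marks = np.argwhere(event == event).flatten()
print(marks)  # [1 4 6]

last_start = max(i for i in marks if 'started' in event[i])
first_finish = min(i for i in marks if 'finish' in event[i])
print(last_start, first_finish)  # 4 6
```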
def areCaseSegmentsCached(caseid):
seg_folder = f"{VITAL_EXTRACTED_SEGMENTS}/{caseid:04d}"
return os.path.exists(seg_folder) and len(os.listdir(seg_folder)) > 0
def isAbpSegmentValidNumpy(samples, debug=False):
valid = True
if np.isnan(samples).mean() > 0.1:
valid = False
if debug:
print(f">10% NaN")
elif (samples > 200).any():
valid = False
if debug:
print(f"Presence of BP > 200")
elif (samples < 30).any():
valid = False
if debug:
print(f"Presence of BP < 30")
elif np.max(samples) - np.min(samples) < 30:
if debug:
print(f"Max - Min test < 30")
valid = False
elif (np.abs(np.diff(samples)) > 30).any(): # abrupt change -> noise
if debug:
print(f"Abrupt change (noise)")
valid = False
return valid
def isAbpSegmentValid(vf, debug=False):
ABP_ECG_SRATE_HZ = 500
ABP_TRACK_NAME = "SNUADC/ART"
samples = np.array(vf.get_track_samples(ABP_TRACK_NAME, 1/ABP_ECG_SRATE_HZ))
return isAbpSegmentValidNumpy(samples, debug)
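The acceptance rules in isAbpSegmentValidNumpy can be exercised on synthetic waveforms. The sketch below uses a compact reimplementation of the same five checks (helper name and test signals are illustrative): a pulsatile trace passes, a flat trace fails the pulse-pressure check, and a trace with an artifact spike fails the >200 mmHg check.

```python
import numpy as np

def abp_segment_ok(samples):
    """Same acceptance rules as isAbpSegmentValidNumpy above."""
    return not (np.isnan(samples).mean() > 0.1                # too many dropouts
                or (samples > 200).any()                      # implausibly high BP
                or (samples < 30).any()                       # implausibly low BP
                or np.max(samples) - np.min(samples) < 30     # flat / damped trace
                or (np.abs(np.diff(samples)) > 30).any())     # abrupt jump -> noise

t = np.linspace(0, 2 * np.pi, 500)
good = 80 + 25 * np.sin(t)        # pulsatile trace, 55-105 mmHg
flat = np.full(500, 80.0)         # no pulsatility: max - min < 30
spiky = good.copy()
spiky[250] = 250.0                # artifact spike above 200 mmHg

print(abp_segment_ok(good), abp_segment_ok(flat), abp_segment_ok(spiky))
# True False False
```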
def saveCaseSegments(caseid, positiveSegments, negativeSegments, compresslevel=9, debug=False, forceWrite=False):
if len(positiveSegments) == 0 and len(negativeSegments) == 0:
# exit early if no events found
print(f'{caseid}: exit early, no segments to save')
return
# event composition
# predictiveSegmentStart in seconds, predictiveSegmentEnd in seconds, predWindow (0 for negative), abp, ecg, eeg)
# 0start, 1end, 2predwindow, 3abp, 4ecg, 5eeg
seg_folder = f"{VITAL_EXTRACTED_SEGMENTS}/{caseid:04d}"
if not os.path.exists(seg_folder):
# if directory needs to be created, then there are no cached segments
os.mkdir(seg_folder)
else:
if not forceWrite:
# exit early if folder already exists, case already produced
return
# prior to writing files out, clear existing files
for filename in os.listdir(seg_folder):
file_path = os.path.join(seg_folder, filename)
if debug:
print(f'deleting: {file_path}')
try:
if os.path.isfile(file_path):
os.unlink(file_path)
except Exception as e:
print('Failed to delete %s. Reason: %s' % (file_path, e))
count_pos_saved = 0
for i in range(0, len(positiveSegments)):
event = positiveSegments[i]
startIndex = event[0]
endIndex = event[1]
predWindow = event[2]
abp = event[3]
#ecg = event[4]
#eeg = event[5]
seg_filename = f"{caseid:04d}_{startIndex}_{predWindow:02d}_True.h5"
seg_fullpath = f"{seg_folder}/{seg_filename}"
if isAbpSegmentValidNumpy(abp, debug):
count_pos_saved += 1
abp = abp.tolist()
ecg = event[4].tolist()
eeg = event[5].tolist()
f = h5py.File(seg_fullpath, "w")
f.create_dataset('abp', data=abp, compression="gzip", compression_opts=compresslevel)
f.create_dataset('ecg', data=ecg, compression="gzip", compression_opts=compresslevel)
f.create_dataset('eeg', data=eeg, compression="gzip", compression_opts=compresslevel)
f.flush()
f.close()
f = None
abp = None
ecg = None
eeg = None
# f.create_dataset('label', data=[1], compression="gzip", compression_opts=compresslevel)
# f.create_dataset('pred_window', data=[event[2]], compression="gzip", compression_opts=compresslevel)
# f.create_dataset('caseid', data=[caseid], compression="gzip", compression_opts=compresslevel)
elif debug:
print(f"{caseid:04d} {predWindow:02d}min {startIndex} starttime = ignored, segment validity issues")
count_neg_saved = 0
for i in range(0, len(negativeSegments)):
event = negativeSegments[i]
startIndex = event[0]
endIndex = event[1]
predWindow = event[2]
abp = event[3]
#ecg = event[4]
#eeg = event[5]
seg_filename = f"{caseid:04d}_{startIndex}_0_False.h5"
seg_fullpath = f"{seg_folder}/{seg_filename}"
if isAbpSegmentValidNumpy(abp, debug):
count_neg_saved += 1
abp = abp.tolist()
ecg = event[4].tolist()
eeg = event[5].tolist()
f = h5py.File(seg_fullpath, "w")
f.create_dataset('abp', data=abp, compression="gzip", compression_opts=compresslevel)
f.create_dataset('ecg', data=ecg, compression="gzip", compression_opts=compresslevel)
f.create_dataset('eeg', data=eeg, compression="gzip", compression_opts=compresslevel)
f.flush()
f.close()
f = None
abp = None
ecg = None
eeg = None
# f.create_dataset('label', data=[0], compression="gzip", compression_opts=compresslevel)
# f.create_dataset('pred_window', data=[0], compression="gzip", compression_opts=compresslevel)
# f.create_dataset('caseid', data=[caseid], compression="gzip", compression_opts=compresslevel)
elif debug:
print(f"{caseid:04d} CleanWindow {startIndex} starttime = ignored, segment validity issues")
if count_neg_saved == 0 and count_pos_saved == 0:
print(f'{caseid}: nothing saved, all segments filtered')
# Generate hypotensive events
# Hypotensive events are defined as a 1-minute interval with sustained ABP of less than 65 mmHg
# Note: Hypotensive events should be at least 20 minutes apart to minimize potential residual effects from previous events
# Generate hypotension non-events
# To sample non-events, 30-minute segments where the ABP was above 75 mmHg were selected, and then
# three one-minute samples of each waveform were obtained from the middle of the segment
# both occur in extract_segments
#VITAL_EXTRACTED_SEGMENTS
def extract_segments(cases_of_interest_idx, debug=False, checkCache=True, forceWrite=False, returnSegments=False):
# Sampling rate for ABP and ECG, Hz. These rates should be the same. Default = 500
ABP_ECG_SRATE_HZ = 500
# Sampling rate for EEG. Default = 128
EEG_SRATE_HZ = 128
# Final dataset for training and testing the model.
positiveSegmentsMap = {}
negativeSegmentsMap = {}
iohEventsMap = {}
cleanEventsMap = {}
# Process each case and extract segments. For each segment identify presence of an event in the label zone.
count_cases = len(cases_of_interest_idx)
#for case_count, caseid in tqdm(enumerate(cases_of_interest_idx), total=count_cases):
for case_count, caseid in enumerate(cases_of_interest_idx):
if debug:
print(f'Loading case: {caseid:04d}, ({case_count + 1} of {count_cases})')
if checkCache and areCaseSegmentsCached(caseid):
if debug:
print(f'Skipping case: {caseid:04d}, already cached')
# skip records we've already cached
continue
# read the arterial waveform
(abp, ecg, eeg, event) = get_track_data(caseid)
if debug:
print(f'Length of {TRACK_NAMES[0]}: {abp.shape[0]}')
print(f'Length of {TRACK_NAMES[1]}: {ecg.shape[0]}')
print(f'Length of {TRACK_NAMES[2]}: {eeg.shape[0]}')
(startInSeconds, endInSeconds) = getSurgeryBoundariesInSeconds(event)
if debug:
print(f"Event markers indicate that surgery begins at {startInSeconds}s and ends at {endInSeconds}s.")
track_length_seconds = int(len(abp) / ABP_ECG_SRATE_HZ)
if debug:
print(f"Processing case {caseid} with length {track_length_seconds}s")
# check if the ABP segment in the surgery window is valid
if debug:
isSurgerySegmentValid = isAbpSegmentValidNumpy(abp[startInSeconds:endInSeconds])
print(f'{caseid}: surgery segment valid: {isSurgerySegmentValid}')
iohEvents = []
cleanEvents = []
i = 0
started = False
eofReached = False
trackStartIndex = None
# set i pointer (which operates in seconds) to start marker for surgery
i = startInSeconds
# FIRST PASS
# in the first forward pass, we are going to identify the start/end boundaries of all IOH events within the case
while i < track_length_seconds - 60 and i < endInSeconds:
segmentStart = None
segmentEnd = None
segFound = False
# look forward one minute
abpSeg = abp[i * ABP_ECG_SRATE_HZ:(i + 60) * ABP_ECG_SRATE_HZ]
# roll forward until we hit a one minute window where mean ABP >= 65 so we know leads are connected and it's tracking
if not started:
if np.nanmean(abpSeg) >= 65:
started = True
trackStartIndex = i
# if we're started and mean abp for the window is <65, we are starting a new IOH event
elif np.nanmean(abpSeg) < 65:
segmentStart = i
# now seek forward to find the end of the event, repeatedly checking the last minute of the IOH event
for j in range(i + 60, track_length_seconds):
# look backward one minute
abpSegForward = abp[(j - 60) * ABP_ECG_SRATE_HZ:j * ABP_ECG_SRATE_HZ]
if np.nanmean(abpSegForward) >= 65:
segmentEnd = j - 1
break
if segmentEnd is None:
eofReached = True
else:
# otherwise, end of the IOH segment has been reached, record it
iohEvents.append((segmentStart, segmentEnd))
segFound = True
if debug:
t_abp = abp[segmentStart * ABP_ECG_SRATE_HZ:segmentEnd * ABP_ECG_SRATE_HZ]
isIohSegmentValid = isAbpSegmentValidNumpy(t_abp)
print(f'{caseid}: ioh segment valid: {isIohSegmentValid}, {segmentStart}, {segmentEnd}, {t_abp.shape}')
i += 1
if not started:
continue
elif eofReached:
break
elif segFound:
i = segmentEnd + 1
# SECOND PASS
# in the second forward pass, we are going to identify the start/end boundaries of all non-overlapping 30 minute "clean" windows
# reuse the 'start of signal' index from our first pass
if trackStartIndex is None:
trackStartIndex = startInSeconds
i = trackStartIndex
eofReached = False
while i < track_length_seconds - 1800 and i < endInSeconds:
segmentStart = None
segmentEnd = None
segFound = False
startIndex = i
endIndex = i + 1800
# check whether this 30-minute window overlaps any IOH events; if so, fast-forward to the end of the latest overlapping event
overlapFound = False
latestEnd = None
for event in iohEvents:
# case 1: starts during an event
if startIndex >= event[0] and startIndex < event[1]:
latestEnd = event[1]
overlapFound = True
# case 2: ends during an event
elif endIndex >= event[0] and endIndex < event[1]:
latestEnd = event[1]
overlapFound = True
# case 3: event occurs entirely inside of the window
elif startIndex < event[0] and endIndex > event[1]:
latestEnd = event[1]
overlapFound = True
# FFWD if we found an overlap
if overlapFound:
i = latestEnd + 1
continue
# look forward 30 minutes
abpSeg = abp[startIndex * ABP_ECG_SRATE_HZ:endIndex * ABP_ECG_SRATE_HZ]
# if we're started and mean abp for the window is >= 75, we are starting a new clean event
if np.nanmean(abpSeg) >= 75:
overlapFound = False
latestEnd = None
for event in iohEvents:
# case 1: starts during an event
if startIndex >= event[0] and startIndex < event[1]:
latestEnd = event[1]
overlapFound = True
# case 2: ends during an event
elif endIndex >= event[0] and endIndex < event[1]:
latestEnd = event[1]
overlapFound = True
# case 3: event occurs entirely inside of the window
elif startIndex < event[0] and endIndex > event[1]:
latestEnd = event[1]
overlapFound = True
if not overlapFound:
segFound = True
segmentEnd = endIndex
cleanEvents.append((startIndex, endIndex))
if debug:
t_abp = abp[startIndex * ABP_ECG_SRATE_HZ:endIndex * ABP_ECG_SRATE_HZ]
isCleanSegmentValid = isAbpSegmentValidNumpy(t_abp)
print(f'{caseid}: clean segment valid: {isCleanSegmentValid}, {startIndex}, {endIndex}, {t_abp.shape}')
i += 10
if segFound:
i = segmentEnd + 1
if debug:
print(f"IOH Events for case {caseid}: {iohEvents}")
print(f"Clean Events for case {caseid}: {cleanEvents}")
positiveSegments = []
negativeSegments = []
# THIRD PASS
# in the third pass, we will use the collections of ioh event windows to generate our actual extracted segments based on our prediction window (positive labels)
for i in range(0, len(iohEvents)):
if debug:
print(f"Checking event {iohEvents[i]}")
# we want to review current event boundaries, as well as previous event boundaries if available
event = iohEvents[i]
previousEvent = None
if i > 0:
previousEvent = iohEvents[i - 1]
for predWindow in ALL_PREDICTION_WINDOWS:
if debug:
print(f"Checking event {iohEvents[i]} for pred {predWindow}")
iohEventStart = event[0]
predictiveSegmentEnd = event[0] - (predWindow*60)
predictiveSegmentStart = predictiveSegmentEnd - 60
if (predictiveSegmentStart < 0):
# don't rewind before the beginning of the track
if debug:
print(f"Checking event {iohEvents[i]} for pred {predWindow} - exit, before beginning")
continue
elif (predictiveSegmentStart < trackStartIndex):
# don't rewind before the beginning of signal in track
if debug:
print(f"Checking event {iohEvents[i]} for pred {predWindow} - exit, before track start")
continue
elif previousEvent is not None:
# does this event window come before or during the previous event?
overlapFound = False
# case 1: starts during an event
if predictiveSegmentStart >= previousEvent[0] and predictiveSegmentStart < previousEvent[1]:
overlapFound = True
# case 2: ends during an event
elif iohEventStart >= previousEvent[0] and iohEventStart < previousEvent[1]:
overlapFound = True
# case 3: event occurs entirely inside of the window
elif predictiveSegmentStart < previousEvent[0] and iohEventStart > previousEvent[1]:
overlapFound = True
# do not extract a segment if it overlaps with another IOH event
if overlapFound:
if debug:
print(f"Checking event {iohEvents[i]} for pred {predWindow} - exit, overlap with earlier segment")
continue
# track the positive segment
positiveSegments.append((predictiveSegmentStart, predictiveSegmentEnd, predWindow,
abp[predictiveSegmentStart*ABP_ECG_SRATE_HZ:predictiveSegmentEnd*ABP_ECG_SRATE_HZ],
ecg[predictiveSegmentStart*ABP_ECG_SRATE_HZ:predictiveSegmentEnd*ABP_ECG_SRATE_HZ],
eeg[predictiveSegmentStart*EEG_SRATE_HZ:predictiveSegmentEnd*EEG_SRATE_HZ]))
# FOURTH PASS
# in the fourth and final pass, we will use the collection of clean event windows to generate our actual extracted segments (negative labels)
for i in range(0, len(cleanEvents)):
# everything will be 30 minutes long at least
event = cleanEvents[i]
# choose sample 1 @ 10 minutes
# choose sample 2 @ 15 minutes
# choose sample 3 @ 20 minutes
timeAtTen = event[0] + 600
timeAtFifteen = event[0] + 900
timeAtTwenty = event[0] + 1200
negativeSegments.append((timeAtTen, timeAtTen + 60, 0,
abp[timeAtTen*ABP_ECG_SRATE_HZ:(timeAtTen + 60)*ABP_ECG_SRATE_HZ],
ecg[timeAtTen*ABP_ECG_SRATE_HZ:(timeAtTen + 60)*ABP_ECG_SRATE_HZ],
eeg[timeAtTen*EEG_SRATE_HZ:(timeAtTen + 60)*EEG_SRATE_HZ]))
negativeSegments.append((timeAtFifteen, timeAtFifteen + 60, 0,
abp[timeAtFifteen*ABP_ECG_SRATE_HZ:(timeAtFifteen + 60)*ABP_ECG_SRATE_HZ],
ecg[timeAtFifteen*ABP_ECG_SRATE_HZ:(timeAtFifteen + 60)*ABP_ECG_SRATE_HZ],
eeg[timeAtFifteen*EEG_SRATE_HZ:(timeAtFifteen + 60)*EEG_SRATE_HZ]))
negativeSegments.append((timeAtTwenty, timeAtTwenty + 60, 0,
abp[timeAtTwenty*ABP_ECG_SRATE_HZ:(timeAtTwenty + 60)*ABP_ECG_SRATE_HZ],
ecg[timeAtTwenty*ABP_ECG_SRATE_HZ:(timeAtTwenty + 60)*ABP_ECG_SRATE_HZ],
eeg[timeAtTwenty*EEG_SRATE_HZ:(timeAtTwenty + 60)*EEG_SRATE_HZ]))
if returnSegments:
positiveSegmentsMap[caseid] = positiveSegments
negativeSegmentsMap[caseid] = negativeSegments
iohEventsMap[caseid] = iohEvents
cleanEventsMap[caseid] = cleanEvents
saveCaseSegments(caseid, positiveSegments, negativeSegments, 9, debug=debug, forceWrite=forceWrite)
#if debug:
print(f'{caseid}: positiveSegments: {len(positiveSegments)}, negativeSegments: {len(negativeSegments)}')
return positiveSegmentsMap, negativeSegmentsMap, iohEventsMap, cleanEventsMap
Case Extraction - Generate Segments Needed For Training¶
Ensure that all needed segments are in place for the cases being used. If the segment data is already cached on disk, this step returns immediately.
print('Time to extract segments!')
Time to extract segments!
MANUAL_EXTRACT=True
if MANUAL_EXTRACT:
mycoi = cases_of_interest_idx
#mycoi = cases_of_interest_idx[:2800]
#mycoi = [1]
cnt = 0
mod = 0
for ci in mycoi:
cnt += 1
if mod % 100 == 0:
print(f'count processed: {mod}, current case index: {ci}')
try:
p, n, i, c = extract_segments([ci], debug=False, checkCache=True, forceWrite=True, returnSegments=False)
p = None
n = None
i = None
c = None
except Exception:
print(f'error on extract segment: {ci}')
mod += 1
print(f'extracted: {cnt}')
count processed: 0, current case index: 1 count processed: 100, current case index: 198 count processed: 200, current case index: 431 count processed: 300, current case index: 665
724: exit early, no segments to save 724: positiveSegments: 0, negativeSegments: 0 818: exit early, no segments to save 818: positiveSegments: 0, negativeSegments: 0 count processed: 400, current case index: 853 count processed: 500, current case index: 1046 count processed: 600, current case index: 1236 1271: exit early, no segments to save 1271: positiveSegments: 0, negativeSegments: 0 count processed: 700, current case index: 1440 1505: exit early, no segments to save 1505: positiveSegments: 0, negativeSegments: 0 count processed: 800, current case index: 1639 count processed: 900, current case index: 1843 count processed: 1000, current case index: 2049 2218: exit early, no segments to save 2218: positiveSegments: 0, negativeSegments: 0 count processed: 1100, current case index: 2281 count processed: 1200, current case index: 2469 count processed: 1300, current case index: 2665 count processed: 1400, current case index: 2888 count processed: 1500, current case index: 3092 count processed: 1600, current case index: 3279 3413: exit early, no segments to save 3413: positiveSegments: 0, negativeSegments: 0 count processed: 1700, current case index: 3475 3476: exit early, no segments to save 3476: positiveSegments: 0, negativeSegments: 0 3533: exit early, no segments to save 3533: positiveSegments: 0, negativeSegments: 0 count processed: 1800, current case index: 3694 count processed: 1900, current case index: 3887 3992: exit early, no segments to save 3992: positiveSegments: 0, negativeSegments: 0 count processed: 2000, current case index: 4091 4187: nothing saved, all segments filtered 4187: positiveSegments: 0, negativeSegments: 18 count processed: 2100, current case index: 4296 4328: exit early, no segments to save 4328: positiveSegments: 0, negativeSegments: 0 count processed: 2200, current case index: 4509 4648: exit early, no segments to save 4648: positiveSegments: 0, negativeSegments: 0 4703: exit early, no segments to save 4703: positiveSegments: 0, 
negativeSegments: 0 count processed: 2300, current case index: 4732 4733: exit early, no segments to save 4733: positiveSegments: 0, negativeSegments: 0 4834: nothing saved, all segments filtered 4834: positiveSegments: 3, negativeSegments: 0 4836: nothing saved, all segments filtered 4836: positiveSegments: 11, negativeSegments: 6 count processed: 2400, current case index: 4929 4985: nothing saved, all segments filtered 4985: positiveSegments: 1, negativeSegments: 0 5130: exit early, no segments to save 5130: positiveSegments: 0, negativeSegments: 0 count processed: 2500, current case index: 5142 5175: nothing saved, all segments filtered 5175: positiveSegments: 2, negativeSegments: 0 5327: nothing saved, all segments filtered 5327: positiveSegments: 4, negativeSegments: 12 count processed: 2600, current case index: 5346 5501: exit early, no segments to save 5501: positiveSegments: 0, negativeSegments: 0 count processed: 2700, current case index: 5564 5587: nothing saved, all segments filtered 5587: positiveSegments: 2, negativeSegments: 0 5693: exit early, no segments to save 5693: positiveSegments: 0, negativeSegments: 0 count processed: 2800, current case index: 5771 5908: exit early, no segments to save 5908: positiveSegments: 0, negativeSegments: 0 count processed: 2900, current case index: 5974 6131: nothing saved, all segments filtered 6131: positiveSegments: 2, negativeSegments: 0 count processed: 3000, current case index: 6174 count processed: 3100, current case index: 6372 extracted: 3110
Track and Segment Validity Checks¶
def printAbp(case_id_to_check, plot_invalid_only=False):
vf_path = f'{VITAL_MINI}/{case_id_to_check:04d}_mini.vital'
if not os.path.isfile(vf_path):
return
vf = vitaldb.VitalFile(vf_path)
abp = vf.to_numpy(TRACK_NAMES[0], 1/500)
print(f'Case {case_id_to_check}')
print(f'ABP Shape: {abp.shape}')
print(f'nanmin: {np.nanmin(abp)}')
print(f'nanmean: {np.nanmean(abp)}')
print(f'nanmax: {np.nanmax(abp)}')
is_valid = isAbpSegmentValidNumpy(abp, debug=True)
print(f'valid: {is_valid}')
if plot_invalid_only and is_valid:
return
plt.figure(figsize=(20, 5))
plt_color = 'C0' if is_valid else 'red'
plt.plot(abp, plt_color)
plt.title(f'ABP - Entire Track - Case {case_id_to_check} - {abp.shape[0] / 500} seconds')
plt.axhline(y = 65, color = 'maroon', linestyle = '--')
plt.show()
def printSegments(segmentsMap, case_id_to_check, print_label, normalize=False):
for (x1, x2, r, abp, ecg, eeg) in segmentsMap[case_id_to_check]:
print(f'{print_label}: Case {case_id_to_check}')
print(f'lookback window: {r} min')
print(f'start time: {x1}')
print(f'end time: {x2}')
print(f'length: {x2 - x1} sec')
print(f'ABP Shape: {abp.shape}')
print(f'ECG Shape: {ecg.shape}')
print(f'EEG Shape: {eeg.shape}')
print(f'nanmin: {np.nanmin(abp)}')
print(f'nanmean: {np.nanmean(abp)}')
print(f'nanmax: {np.nanmax(abp)}')
is_valid = isAbpSegmentValidNumpy(abp, debug=True)
print(f'valid: {is_valid}')
# ABP normalization
x_abp = np.copy(abp)
if normalize:
x_abp -= 65
x_abp /= 65
plt.figure(figsize=(20, 5))
plt_color = 'C0' if is_valid else 'red'
plt.plot(x_abp, plt_color)
plt.title('ABP')
plt.axhline(y = 65, color = 'maroon', linestyle = '--')
plt.show()
plt.figure(figsize=(20, 5))
plt.plot(ecg, 'teal')
plt.title('ECG')
plt.show()
plt.figure(figsize=(20, 5))
plt.plot(eeg, 'indigo')
plt.title('EEG')
plt.show()
print()
def printEvents(abp_raw, eventsMap, case_id_to_check, print_label, normalize=False):
for (x1, x2) in eventsMap[case_id_to_check]:
print(f'{print_label}: Case {case_id_to_check}')
print(f'start time: {x1}')
print(f'end time: {x2}')
print(f'length: {x2 - x1} sec')
abp = abp_raw[x1*500:x2*500]
print(f'ABP Shape: {abp.shape}')
print(f'nanmin: {np.nanmin(abp)}')
print(f'nanmean: {np.nanmean(abp)}')
print(f'nanmax: {np.nanmax(abp)}')
is_valid = isAbpSegmentValidNumpy(abp, debug=True)
print(f'valid: {is_valid}')
# ABP normalization
x_abp = np.copy(abp)
if normalize:
x_abp -= 65
x_abp /= 65
plt.figure(figsize=(20, 5))
plt_color = 'C0' if is_valid else 'red'
plt.plot(x_abp, plt_color)
plt.title('ABP')
plt.axhline(y = 65, color = 'maroon', linestyle = '--')
plt.show()
print()
Reality Check All Cases¶
# Check if all ABPs are well formed.
DISPLAY_REALITY_CHECK_ABP=True
DISPLAY_REALITY_CHECK_ABP_FIRST_ONLY=True
if DISPLAY_REALITY_CHECK_ABP:
for case_id_to_check in cases_of_interest_idx:
printAbp(case_id_to_check, plot_invalid_only=False)
if DISPLAY_REALITY_CHECK_ABP_FIRST_ONLY:
break
Case 1 ABP Shape: (5770575, 1) nanmin: -495.6260070800781 nanmean: 78.15251159667969 nanmax: 374.3236389160156 Presence of BP > 200 valid: False
Validate Malformed Vital Files - Missing One Or More Tracks¶
# These are Vital Files removed because of malformed ABP waveforms.
DISPLAY_MALFORMED_ABP=True
DISPLAY_MALFORMED_ABP_FIRST_ONLY=True
if DISPLAY_MALFORMED_ABP:
malformed_case_ids = pd.read_csv('malformed_tracks_filter.csv', header=None, names=['caseid']).set_index('caseid').index
for case_id_to_check in malformed_case_ids:
printAbp(case_id_to_check)
if DISPLAY_MALFORMED_ABP_FIRST_ONLY:
break
Case 3 ABP Shape: (2197020, 1) nanmin: nan nanmean: nan nanmax: nan >10% NaN valid: False
/var/folders/vz/d4v0jn551nb7dpr_m204kt100000gn/T/ipykernel_65449/2874601318.py:13: RuntimeWarning: All-NaN slice encountered
print(f'nanmin: {np.nanmin(abp)}')
/var/folders/vz/d4v0jn551nb7dpr_m204kt100000gn/T/ipykernel_65449/2874601318.py:14: RuntimeWarning: Mean of empty slice
print(f'nanmean: {np.nanmean(abp)}')
/var/folders/vz/d4v0jn551nb7dpr_m204kt100000gn/T/ipykernel_65449/2874601318.py:15: RuntimeWarning: All-NaN slice encountered
print(f'nanmax: {np.nanmax(abp)}')
Validate Cases With No Segments Saved¶
DISPLAY_NO_SEGMENTS_CASES=True
DISPLAY_NO_SEGMENTS_CASES_FIRST_ONLY=True
if DISPLAY_NO_SEGMENTS_CASES:
no_segments_case_ids = [3413, 3476, 3533, 3992, 4328, 4648, 4703, 4733, 5130, 5501, 5693, 5908]
for case_id_to_check in no_segments_case_ids:
printAbp(case_id_to_check)
if DISPLAY_NO_SEGMENTS_CASES_FIRST_ONLY:
break
Case 3413 ABP Shape: (3429927, 1) nanmin: -228.025146484375 nanmean: 48.4425163269043 nanmax: 293.3521423339844 >10% NaN valid: False
Select Case For Segment Extraction Validation¶
Generate segment data for one or more cases.
#mycoi = cases_of_interest_idx
mycoi = cases_of_interest_idx[:1]
#mycoi = [1]
positiveSegmentsMap, negativeSegmentsMap, iohEventsMap, cleanEventsMap = \
extract_segments(mycoi, debug=False, checkCache=False, forceWrite=False, returnSegments=True)
1: positiveSegments: 12, negativeSegments: 9
Select a specific case to check.
case_id_to_check = cases_of_interest_idx[0]
#case_id_to_check = 1
print(case_id_to_check)
1
print((
len(positiveSegmentsMap[case_id_to_check]),
len(negativeSegmentsMap[case_id_to_check]),
len(iohEventsMap[case_id_to_check]),
len(cleanEventsMap[case_id_to_check])
))
(12, 9, 7, 3)
printAbp(case_id_to_check)
Case 1 ABP Shape: (5770575, 1) nanmin: -495.6260070800781 nanmean: 78.15251159667969 nanmax: 374.3236389160156 Presence of BP > 200 valid: False
Positive Segments for Case - IOH Events¶
printSegments(positiveSegmentsMap, case_id_to_check, 'Positive Segment - IOH Event', normalize=False)
Positive Segment - IOH Event: Case 1 lookback window: 3 min start time: 1548 end time: 1608 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 46.487884521484375 nanmean: 73.00869750976562 nanmax: 113.63497924804688 valid: True
Positive Segment - IOH Event: Case 1 lookback window: 5 min start time: 1428 end time: 1488 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 41.550628662109375 nanmean: 74.47395324707031 nanmax: 128.44686889648438 valid: True
Positive Segment - IOH Event: Case 1 lookback window: 10 min start time: 1128 end time: 1188 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 53.400115966796875 nanmean: 88.63211059570312 nanmax: 135.35903930664062 valid: True
Positive Segment - IOH Event: Case 1 lookback window: 15 min start time: 828 end time: 888 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 23.776397705078125 nanmean: 108.88127136230469 nanmax: 182.75698852539062 Presence of BP < 30 valid: False
Positive Segment - IOH Event: Case 1 lookback window: 3 min start time: 3873 end time: 3933 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 46.487884521484375 nanmean: 75.3544692993164 nanmax: 124.49703979492188 valid: True
Positive Segment - IOH Event: Case 1 lookback window: 5 min start time: 3753 end time: 3813 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 45.500457763671875 nanmean: 73.97709655761719 nanmax: 122.52212524414062 valid: True
Positive Segment - IOH Event: Case 1 lookback window: 10 min start time: 3453 end time: 3513 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 52.412628173828125 nanmean: 86.52787780761719 nanmax: 148.19595336914062 valid: True
Positive Segment - IOH Event: Case 1 lookback window: 15 min start time: 3153 end time: 3213 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 58.337371826171875 nanmean: 100.94121551513672 nanmax: 165.97018432617188 valid: True
Positive Segment - IOH Event: Case 1 lookback window: 3 min start time: 8856 end time: 8916 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 64.26211547851562 nanmean: 97.06536102294922 nanmax: 157.08309936523438 valid: True
Positive Segment - IOH Event: Case 1 lookback window: 5 min start time: 8736 end time: 8796 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 69.19943237304688 nanmean: 105.55238342285156 nanmax: 163.00784301757812 valid: True
Positive Segment - IOH Event: Case 1 lookback window: 10 min start time: 8436 end time: 8496 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: -88.793701171875 nanmean: 130.62982177734375 nanmax: 305.2016296386719 Presence of BP > 200 valid: False
Positive Segment - IOH Event: Case 1 lookback window: 15 min start time: 8136 end time: 8196 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 62.287200927734375 nanmean: 92.04357147216797 nanmax: 138.32138061523438 valid: True
Negative Segments for Case - Non Events¶
printSegments(negativeSegmentsMap, case_id_to_check, 'Negative Segment - Non-Event', normalize=False)
Negative Segment - Non-Event: Case 1 lookback window: 0 min start time: 5951 end time: 6011 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 52.412628173828125 nanmean: 76.35643005371094 nanmax: 120.54721069335938 valid: True
Negative Segment - Non-Event: Case 1 lookback window: 0 min start time: 6251 end time: 6311 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 54.387542724609375 nanmean: 77.73150634765625 nanmax: 120.54721069335938 valid: True
Negative Segment - Non-Event: Case 1 lookback window: 0 min start time: 6551 end time: 6611 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 58.337371826171875 nanmean: 85.06976318359375 nanmax: 133.38412475585938 valid: True
Negative Segment - Non-Event: Case 1 lookback window: 0 min start time: 7752 end time: 7812 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 55.375030517578125 nanmean: 80.11844635009766 nanmax: 130.42178344726562 valid: True
Negative Segment - Non-Event: Case 1 lookback window: 0 min start time: 8052 end time: 8112 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 60.312286376953125 nanmean: 88.32589721679688 nanmax: 134.37161254882812 valid: True
Negative Segment - Non-Event: Case 1 lookback window: 0 min start time: 8352 end time: 8412 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 68.21194458007812 nanmean: 182.59963989257812 nanmax: 368.3988952636719 Presence of BP > 200 valid: False
Negative Segment - Non-Event: Case 1 lookback window: 0 min start time: 10104 end time: 10164 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 48.462799072265625 nanmean: 72.81173706054688 nanmax: 115.60989379882812 valid: True
Negative Segment - Non-Event: Case 1 lookback window: 0 min start time: 10404 end time: 10464 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: -7.822235107421875 nanmean: 106.73753356933594 nanmax: 236.07968139648438 Presence of BP > 200 valid: False
Negative Segment - Non-Event: Case 1 lookback window: 0 min start time: 10704 end time: 10764 length: 60 sec ABP Shape: (30000,) ECG Shape: (30000,) EEG Shape: (7680,) nanmin: 110.67263793945312 nanmean: 172.22396850585938 nanmax: 239.04202270507812 Presence of BP > 200 valid: False
IOH Event Segments for Case - Positive Segments Identified From These¶
tmp_vf_path = f'{VITAL_MINI}/{case_id_to_check:04d}_mini.vital'
tmp_vf = vitaldb.VitalFile(tmp_vf_path)
tmp_abp = tmp_vf.to_numpy(TRACK_NAMES[0], 1/500)
printEvents(tmp_abp, iohEventsMap, case_id_to_check, 'IOH Event Segment', normalize=False)
IOH Event Segment: Case 1 start time: 1788 end time: 1849 length: 61 sec ABP Shape: (30500, 1) nanmin: 32.663482666015625 nanmean: 64.93988037109375 nanmax: 123.50955200195312 valid: True
IOH Event Segment: Case 1 start time: 1850 end time: 2113 length: 263 sec ABP Shape: (131500, 1) nanmin: 37.600799560546875 nanmean: 63.139060974121094 nanmax: 101.78549194335938 valid: True
IOH Event Segment: Case 1 start time: 2314 end time: 2375 length: 61 sec ABP Shape: (30500, 1) nanmin: -262.5861511230469 nanmean: 65.14369201660156 nanmax: 343.7124938964844 Presence of BP > 200 valid: False
IOH Event Segment: Case 1 start time: 4113 end time: 4199 length: 86 sec ABP Shape: (43000, 1) nanmin: 22.788909912109375 nanmean: 65.0725326538086 nanmax: 153.13327026367188 Presence of BP < 30 valid: False
IOH Event Segment: Case 1 start time: 4261 end time: 5350 length: 1089 sec ABP Shape: (544500, 1) nanmin: 36.613311767578125 nanmean: 60.451026916503906 nanmax: 110.67263793945312 valid: True
IOH Event Segment: Case 1 start time: 9096 end time: 9156 length: 60 sec ABP Shape: (30000, 1) nanmin: 40.563140869140625 nanmean: 64.9837646484375 nanmax: 108.69772338867188 valid: True
IOH Event Segment: Case 1 start time: 9157 end time: 9503 length: 346 sec ABP Shape: (173000, 1) nanmin: 39.575714111328125 nanmean: 62.33021545410156 nanmax: 104.74789428710938 valid: True
Clean Event Segments for Case - Negative Segments Identified From These¶
printEvents(tmp_abp, cleanEventsMap, case_id_to_check, 'Clean Event Segment', normalize=False)
Clean Event Segment: Case 1 start time: 5351 end time: 7151 length: 1800 sec ABP Shape: (900000, 1) nanmin: 40.563140869140625 nanmean: 84.04818725585938 nanmax: 151.15835571289062 valid: True
Clean Event Segment: Case 1 start time: 7152 end time: 8952 length: 1800 sec ABP Shape: (900000, 1) nanmin: -495.6260070800781 nanmean: 99.71124267578125 nanmax: 368.3988952636719 Presence of BP > 200 valid: False
Clean Event Segment: Case 1 start time: 9504 end time: 11304 length: 1800 sec ABP Shape: (900000, 1) nanmin: -49.295440673828125 nanmean: 83.3201675415039 nanmax: 346.6748352050781 Presence of BP > 200 valid: False
# free memory
tmp_abp = None
Generate Train/Val/Test Splits¶
def get_segment_attributes_from_filename(file_path):
    pieces = os.path.basename(file_path).split('_')
    case = int(pieces[0])
    startX = int(pieces[1])
    predWindow = int(pieces[2])
    label = pieces[3].replace('.h5', '')
    return (case, startX, predWindow, label)
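The parser above assumes segment files are named `{case}_{startX}_{predWindow}_{label}.h5`. A minimal, self-contained sketch of the same parsing (the example path is hypothetical):

```python
import os

def get_segment_attributes_from_filename(file_path):
    # Parse "{case}_{startX}_{predWindow}_{label}.h5" into typed fields
    pieces = os.path.basename(file_path).split('_')
    return (int(pieces[0]), int(pieces[1]), int(pieces[2]),
            pieces[3].replace('.h5', ''))

# Hypothetical example path following the naming convention:
attrs = get_segment_attributes_from_filename('/tmp/segments/0001_1548_3_True.h5')
print(attrs)  # (1, 1548, 3, 'True')
```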
count_negative_samples = 0
count_positive_samples = 0
samples = []

from glob import glob

seg_folder = f"{VITAL_EXTRACTED_SEGMENTS}"
filenames = [y for x in os.walk(seg_folder) for y in glob(os.path.join(x[0], '*.h5'))]
for filename in filenames:
    (case, start_x, pred_window, label) = get_segment_attributes_from_filename(filename)
    # only load segments for cases of interest; this folder could have segments for hundreds of cases
    if case not in cases_of_interest_idx:
        continue
    if pred_window == 0 or pred_window == PREDICTION_WINDOW or PREDICTION_WINDOW == 'ALL':
        if label == 'True':
            count_positive_samples += 1
        else:
            count_negative_samples += 1
        samples.append((filename, label))

print()
print(f"samples loaded: {len(samples):5}")
print(f'count negative samples: {count_negative_samples:5}')
print(f'count positive samples: {count_positive_samples:5}')
samples loaded: 45357
count negative samples: 37205
count positive samples:  8152
# Divide samples by case
sample_cases = defaultdict(list)
for fn, _ in samples:
    (case, start_x, pred_window, label) = get_segment_attributes_from_filename(fn)
    sample_cases[case].append((fn, label))

# understand any missing cases of interest
sample_cases_idx = pd.Index(sample_cases.keys())
missing_case_ids = cases_of_interest_idx.difference(sample_cases_idx)
print(f'cases with no samples: {missing_case_ids.shape[0]}')
print(f'  {missing_case_ids}')
print()

# Split data into training, validation, and test sets.
# Use a 6:1:3 ratio and prevent samples from a single case from being split across different sets.
# Note: the number of samples at each time point is not the same, because the first event can occur before the 3/5/10/15 minute mark.

# Set target sizes
train_ratio = 0.6
val_ratio = 0.1
test_ratio = 1 - train_ratio - val_ratio  # ensure ratios sum to 1

# Split cases into train and other
sample_cases_train, sample_cases_other = train_test_split(list(sample_cases.keys()), test_size=(1 - train_ratio), random_state=RANDOM_SEED)
# Split other into val and test
sample_cases_val, sample_cases_test = train_test_split(sample_cases_other, test_size=(test_ratio / (1 - train_ratio)), random_state=RANDOM_SEED)

# Check how many cases are in each set
print('Train/Val/Test Summary by Cases')
print(f"Train cases: {len(sample_cases_train):5}, ({len(sample_cases_train) / len(sample_cases):.2%})")
print(f"Val cases:   {len(sample_cases_val):5}, ({len(sample_cases_val) / len(sample_cases):.2%})")
print(f"Test cases:  {len(sample_cases_test):5}, ({len(sample_cases_test) / len(sample_cases):.2%})")
print(f"Total cases: {(len(sample_cases_train) + len(sample_cases_val) + len(sample_cases_test)):5}")
cases with no samples: 27
Index([ 124, 724, 818, 1271, 1505, 2111, 2218, 3413, 3476, 3533, 3992, 4187,
4328, 4648, 4703, 4733, 4834, 4836, 4985, 5130, 5175, 5327, 5501, 5587,
5693, 5908, 6131],
dtype='int64')
Train/Val/Test Summary by Cases
Train cases: 1849, (59.97%)
Val cases: 308, (9.99%)
Test cases: 926, (30.04%)
Total cases: 3083
sample_cases_train = set(sample_cases_train)
sample_cases_val = set(sample_cases_val)
sample_cases_test = set(sample_cases_test)

samples_train = []
samples_val = []
samples_test = []
for cid, segs in sample_cases.items():
    if cid in sample_cases_train:
        samples_train.extend(segs)
    elif cid in sample_cases_val:
        samples_val.extend(segs)
    elif cid in sample_cases_test:
        samples_test.extend(segs)

# Check how many samples are in each set
print('Train/Val/Test Summary by Events')
print(f"Train events: {len(samples_train):5}, ({len(samples_train) / len(samples):.2%})")
print(f"Val events:   {len(samples_val):5}, ({len(samples_val) / len(samples):.2%})")
print(f"Test events:  {len(samples_test):5}, ({len(samples_test) / len(samples):.2%})")
print(f"Total events: {(len(samples_train) + len(samples_val) + len(samples_test)):5}")
Train/Val/Test Summary by Events
Train events: 27083, (59.71%)
Val events:    4506, (9.93%)
Test events:  13768, (30.35%)
Total events: 45357
Validate train/val/test Splits¶
PRINT_ALL_CASE_SPLIT_DETAILS = False

case_to_sample_distribution = defaultdict(lambda: {'train': [0, 0], 'val': [0, 0], 'test': [0, 0]})

def populate_case_to_sample_distribution(mysamples, idx):
    neg = 0
    pos = 0
    for fn, _ in mysamples:
        (case, start_x, pred_window, label) = get_segment_attributes_from_filename(fn)
        slot = 0 if label == 'False' else 1
        case_to_sample_distribution[case][idx][slot] += 1
        if slot == 0:
            neg += 1
        else:
            pos += 1
    return (neg, pos)

train_neg, train_pos = populate_case_to_sample_distribution(samples_train, 'train')
val_neg, val_pos = populate_case_to_sample_distribution(samples_val, 'val')
test_neg, test_pos = populate_case_to_sample_distribution(samples_test, 'test')

print(f'Total Cases Present: {len(case_to_sample_distribution):5}')
print()

train_tot = train_pos + train_neg
val_tot = val_pos + val_neg
test_tot = test_pos + test_neg
print(f'Train: P: {train_pos:5} ({(train_pos/train_tot):.2}), N: {train_neg:5} ({(train_neg/train_tot):.2})')
print(f'Val:   P: {val_pos:5} ({(val_pos/val_tot):.2}), N: {val_neg:5} ({(val_neg/val_tot):.2})')
print(f'Test:  P: {test_pos:5} ({(test_pos/test_tot):.2}), N: {test_neg:5} ({(test_neg/test_tot):.2})')
print()

total_pos = train_pos + val_pos + test_pos
total_neg = train_neg + val_neg + test_neg
total = total_pos + total_neg
print(f'P/N Ratio: {total_pos}:{total_neg}')
print(f'P Percent: {(total_pos/total):.2}')
print(f'N Percent: {(total_neg/total):.2}')
print()

if PRINT_ALL_CASE_SPLIT_DETAILS:
    for ci in sorted(case_to_sample_distribution.keys()):
        print(f'{ci}: {case_to_sample_distribution[ci]}')
Total Cases Present:  3083

Train: P:  4848 (0.18), N: 22235 (0.82)
Val:   P:   828 (0.18), N:  3678 (0.82)
Test:  P:  2476 (0.18), N: 11292 (0.82)

P/N Ratio: 8152:37205
P Percent: 0.18
N Percent: 0.82
def check_data_leakage(full_data, train_data, val_data, test_data):
    # Convert to sets for easier operations
    full_data_set = set(full_data)
    train_data_set = set(train_data)
    val_data_set = set(val_data)
    test_data_set = set(test_data)
    # Check that train, val, and test are subsets of full_data
    if not train_data_set.issubset(full_data_set):
        return "Train data has leakage"
    if not val_data_set.issubset(full_data_set):
        return "Validation data has leakage"
    if not test_data_set.issubset(full_data_set):
        return "Test data has leakage"
    # Check that train, val, and test are pairwise disjoint
    if train_data_set & val_data_set:
        return "Train and validation data are not disjoint"
    if train_data_set & test_data_set:
        return "Train and test data are not disjoint"
    if val_data_set & test_data_set:
        return "Validation and test data are not disjoint"
    return "No data leakage detected"

# Usage
print(check_data_leakage(list(sample_cases.keys()), sample_cases_train, sample_cases_val, sample_cases_test))
No data leakage detected
# Create vitalDataset class
class vitalDataset(Dataset):
    def __init__(self, samples, normalize_abp=False):
        self.samples = samples
        self.normalize_abp = normalize_abp

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        # Get metadata for this event
        segment = self.samples[idx]
        file_path = segment[0]
        label = (segment[1] == "True" or segment[1] == "True.vital")
        (abp, ecg, eeg) = get_segment_data(file_path)
        if abp is None or eeg is None or ecg is None:
            return (np.zeros(30000), np.zeros(30000), np.zeros(7680), 0)
        if self.normalize_abp:
            # center on the 65 mmHg IOH threshold and scale by it
            abp -= 65
            abp /= 65
        return abp, ecg, eeg, label

NORMALIZE_ABP = False
train_dataset = vitalDataset(samples_train, NORMALIZE_ABP)
val_dataset = vitalDataset(samples_val, NORMALIZE_ABP)
test_dataset = vitalDataset(samples_test, NORMALIZE_ABP)
Classification Studies¶
Check whether the data can be easily classified using non-deep-learning methods. Create a balanced sample of IOH and non-IOH events and use a simple classifier to see whether the classes can be separated. Datasets that non-deep-learning methods can separate easily should also be easy for deep learning models to classify.
MAX_CLASSIFICATION_SAMPLES = 250
MAX_SAMPLE_SIZE = 1600

classification_sample_size = min(MAX_SAMPLE_SIZE, len(samples))
classification_samples = random.sample(samples, classification_sample_size)

positive_samples = []
negative_samples = []
for sample in classification_samples:
    (sampleAbp, sampleEcg, sampleEeg) = get_segment_data(sample[0])
    if sample[1] == "True":
        positive_samples.append([sample[0], True, sampleAbp, sampleEcg, sampleEeg])
    else:
        negative_samples.append([sample[0], False, sampleAbp, sampleEcg, sampleEeg])

positive_samples = pd.DataFrame(positive_samples, columns=["file_path", "segment_label", "segment_abp", "segment_ecg", "segment_eeg"])
negative_samples = pd.DataFrame(negative_samples, columns=["file_path", "segment_label", "segment_abp", "segment_ecg", "segment_eeg"])

total_to_sample_pos = min(MAX_CLASSIFICATION_SAMPLES, len(positive_samples))
total_to_sample_neg = min(MAX_CLASSIFICATION_SAMPLES, len(negative_samples))

# Select up to MAX_CLASSIFICATION_SAMPLES random samples where segment_label is True
positive_samples = positive_samples.sample(total_to_sample_pos, random_state=RANDOM_SEED)
# Select up to MAX_CLASSIFICATION_SAMPLES random samples where segment_label is False
negative_samples = negative_samples.sample(total_to_sample_neg, random_state=RANDOM_SEED)

print(f'positive_samples: {len(positive_samples)}')
print(f'negative_samples: {len(negative_samples)}')

# Combine the positive and negative samples
samples_balanced = pd.concat([positive_samples, negative_samples])
positive_samples: 250
negative_samples: 250
Define function to build data for study. Each waveform field can be enabled or disabled:
def get_x_y(samples, use_abp, use_ecg, use_eeg):
    # Build X and y from `samples`, concatenating the waveforms enabled by
    # the `use_abp`, `use_ecg`, and `use_eeg` flags
    X = []
    y = []
    for i in range(len(samples)):
        row = samples.iloc[i]
        sample = np.array([])
        if use_abp:
            if len(row['segment_abp']) != 30000:
                print(len(row['segment_abp']))  # flag unexpected segment lengths
            sample = np.append(sample, row['segment_abp'])
        if use_ecg:
            if len(row['segment_ecg']) != 30000:
                print(len(row['segment_ecg']))
            sample = np.append(sample, row['segment_ecg'])
        if use_eeg:
            if len(row['segment_eeg']) != 7680:
                print(len(row['segment_eeg']))
            sample = np.append(sample, row['segment_eeg'])
        X.append(sample)
        # Convert the label from boolean to 0 or 1
        y.append(int(row['segment_label']))
    return X, y
KNN¶
Define KNN run. This is configurable to enable or disable different data channels so that we can study them individually or together:
N_NEIGHBORS = 20

def run_knn(samples, use_abp, use_ecg, use_eeg):
    # Get samples
    X, y = get_x_y(samples, use_abp, use_ecg, use_eeg)
    # Split samples into train and test
    knn_X_train, knn_X_test, knn_y_train, knn_y_test = train_test_split(X, y, test_size=0.2, random_state=RANDOM_SEED)
    # Normalize the data
    scaler = StandardScaler()
    scaler.fit(knn_X_train)
    knn_X_train = scaler.transform(knn_X_train)
    knn_X_test = scaler.transform(knn_X_test)
    # Initialize the KNN classifier
    knn = KNeighborsClassifier(n_neighbors=N_NEIGHBORS)
    # Train the KNN classifier
    knn.fit(knn_X_train, knn_y_train)
    # Make predictions on the test set
    knn_y_pred = knn.predict(knn_X_test)
    # Evaluate the KNN classifier
    print(f"ABP: {use_abp}, ECG: {use_ecg}, EEG: {use_eeg}")
    print(f"Confusion matrix:\n{confusion_matrix(knn_y_test, knn_y_pred)}")
    print(f"Classification report:\n{classification_report(knn_y_test, knn_y_pred)}")
Study each waveform independently, then ABP+EEG (which had best results in paper), and ABP+ECG+EEG:
run_knn(samples_balanced, use_abp=True, use_ecg=False, use_eeg=False)
run_knn(samples_balanced, use_abp=False, use_ecg=True, use_eeg=False)
run_knn(samples_balanced, use_abp=False, use_ecg=False, use_eeg=True)
run_knn(samples_balanced, use_abp=True, use_ecg=False, use_eeg=True)
run_knn(samples_balanced, use_abp=True, use_ecg=True, use_eeg=True)
ABP: True, ECG: False, EEG: False
Confusion matrix:
[[44 10]
[23 23]]
Classification report:
precision recall f1-score support
0 0.66 0.81 0.73 54
1 0.70 0.50 0.58 46
accuracy 0.67 100
macro avg 0.68 0.66 0.65 100
weighted avg 0.68 0.67 0.66 100
ABP: False, ECG: True, EEG: False
Confusion matrix:
[[54 0]
[46 0]]
Classification report:
precision recall f1-score support
0 0.54 1.00 0.70 54
1 0.00 0.00 0.00 46
accuracy 0.54 100
macro avg 0.27 0.50 0.35 100
weighted avg 0.29 0.54 0.38 100
ABP: False, ECG: False, EEG: True
Confusion matrix:
[[ 3 51]
[ 2 44]]
Classification report:
precision recall f1-score support
0 0.60 0.06 0.10 54
1 0.46 0.96 0.62 46
accuracy 0.47 100
macro avg 0.53 0.51 0.36 100
weighted avg 0.54 0.47 0.34 100
/Users/maciej/.local/share/virtualenvs/cs598-dlh-project-ZfsHlfot/lib/python3.12/site-packages/sklearn/metrics/_classification.py:1509: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, f"{metric.capitalize()} is", len(result))
/Users/maciej/.local/share/virtualenvs/cs598-dlh-project-ZfsHlfot/lib/python3.12/site-packages/sklearn/metrics/_classification.py:1509: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, f"{metric.capitalize()} is", len(result))
/Users/maciej/.local/share/virtualenvs/cs598-dlh-project-ZfsHlfot/lib/python3.12/site-packages/sklearn/metrics/_classification.py:1509: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, f"{metric.capitalize()} is", len(result))
ABP: True, ECG: False, EEG: True
Confusion matrix:
[[43 11]
[20 26]]
Classification report:
precision recall f1-score support
0 0.68 0.80 0.74 54
1 0.70 0.57 0.63 46
accuracy 0.69 100
macro avg 0.69 0.68 0.68 100
weighted avg 0.69 0.69 0.69 100
ABP: True, ECG: True, EEG: True
Confusion matrix:
[[50 4]
[29 17]]
Classification report:
precision recall f1-score support
0 0.63 0.93 0.75 54
1 0.81 0.37 0.51 46
accuracy 0.67 100
macro avg 0.72 0.65 0.63 100
weighted avg 0.71 0.67 0.64 100
Based on the results above, the ABP data alone is moderately predictive, with a macro-average F1-score of 0.65. The ECG and EEG data alone are weakly predictive, with macro-average F1-scores of 0.35 and 0.36, respectively. The ABP+EEG combination performs best, with a macro-average F1-score of 0.68, while ABP+ECG+EEG reaches 0.63.
Models based on ABP alone or on ABP+EEG are therefore expected to train readily with good performance. The other signals appear to mostly add noise rather than predictive power. This agrees with the results from the paper.
t-SNE¶
Define t-SNE run. This is configurable to enable or disable different data channels so that we can study them individually or together:
def run_tsne(samples, use_abp, use_ecg, use_eeg):
    # Get samples
    X, y = get_x_y(samples, use_abp, use_ecg, use_eeg)
    # Convert X and y to numpy arrays
    X = np.array(X)
    y = np.array(y)
    # Run t-SNE to embed the samples in two dimensions for plotting
    tsne = TSNE(n_components=2, random_state=RANDOM_SEED)
    X_tsne = tsne.fit_transform(X)
    # Create a scatter plot of the t-SNE representation
    plt.figure(figsize=(16, 9))
    plt.title(f"use_abp={use_abp}, use_ecg={use_ecg}, use_eeg={use_eeg}")
    for label in sorted(set(y)):
        plt.scatter(X_tsne[y == label, 0], X_tsne[y == label, 1], label=label)
    plt.legend()
    plt.show()
Study each waveform independently, then ABP+EEG (which had best results in paper), and ABP+ECG+EEG:
run_tsne(samples_balanced, use_abp=True, use_ecg=False, use_eeg=False)
run_tsne(samples_balanced, use_abp=False, use_ecg=True, use_eeg=False)
run_tsne(samples_balanced, use_abp=False, use_ecg=False, use_eeg=True)
run_tsne(samples_balanced, use_abp=True, use_ecg=False, use_eeg=True)
run_tsne(samples_balanced, use_abp=True, use_ecg=True, use_eeg=True)
Based on the plots above, it appears that ABP alone, ABP+EEG and ABP+ECG+EEG are somewhat separable, though with outliers, and should be trainable by our model. The ECG and EEG data are not easily separable from the other data. This agrees with the results from the paper.
# cleanup
del samples_balanced
Model¶
The model implementation is based on the CNN architecture described in Jo Y-Y et al. (2022). It is designed to handle 1, 2, or 3 signal categories simultaneously, allowing for flexible model configurations based on different combinations of physiological signals:
- ABP alone
- EEG alone
- ECG alone
- ABP + EEG
- ABP + ECG
- EEG + ECG
- ABP + EEG + ECG
Model Architecture¶
The architecture, as depicted in Figure 2 from the original paper, utilizes a ResNet-based approach tailored for time-series data from different physiological signals. The model architecture is adapted to handle varying input signal frequencies, with specific hyperparameters for each signal type, particularly EEG, due to its distinct characteristics compared to ABP and ECG. A diagram of the model architecture is shown below:
Each input signal is processed through a sequence of 12 seven-layer residual blocks, followed by flattening and a linear transformation to produce a 32-dimensional feature vector per signal type. These vectors are concatenated (when multiple signals are used) and passed through two additional linear layers to produce a single output value, the IOH index. A threshold, determined experimentally to minimize the difference between sensitivity and specificity, is applied to this index to perform binary classification for predicting IOH events.
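The threshold selection step can be sketched as follows. `pick_threshold` is a hypothetical helper, not code from the paper: it scans candidate cutoffs on the IOH index and keeps the one minimizing the sensitivity/specificity gap:

```python
import numpy as np

def pick_threshold(y_true, scores):
    """Hypothetical helper: pick the cutoff on the IOH index that
    minimizes |sensitivity - specificity| over the candidate scores."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    best_t, best_gap = 0.5, float('inf')
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1))
        fn = np.sum(~pred & (y_true == 1))
        tn = np.sum(~pred & (y_true == 0))
        fp = np.sum(pred & (y_true == 0))
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        gap = abs(sens - spec)
        if gap < best_gap:
            best_gap, best_t = gap, t
    return best_t

print(pick_threshold([0, 0, 0, 1, 1, 1], [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]))  # 0.6
```

In practice the scan would run over the validation-set IOH indices rather than a handful of toy scores.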
The hyperparameters for the residual blocks are specified in Supplemental Table 1 of the original paper and vary by signal type.
A forward pass through the model traverses 85 layers per signal before concatenation (12 residual blocks × 7 layers = 84, plus the per-signal linear layer), followed by two more linear layers and finally a sigmoid activation to produce the prediction measure.
Residual Block Definition¶
Each residual block consists of the following seven layers:
- Batch normalization
- ReLU
- Dropout (0.5)
- 1D convolution
- Batch normalization
- ReLU
- 1D convolution
Skip connections are included to aid in gradient flow during training, with optional 1D convolution in the skip connection to align dimensions.
Residual Block Hyperparameters¶
The hyperparameters are detailed in Supplemental Table 1 of the original paper. A screenshot of these hyperparameters is provided for reference below:

Note: Please be aware of a transcription error in the original paper's Supplemental Table 1 for the ECG+ABP configuration in Residual Blocks 11 and 12, where the output size should be 469 * 6 instead of the reported 496 * 6.
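The corrected value can be checked by reconstructing the sequence lengths: the 30,000-sample ABP/ECG input is halved (rounding up) in every other residual block, six times across the 12 blocks:

```python
import math

# Sequence length is halved (with ceiling) in every even-indexed block,
# i.e. six times over 12 residual blocks
sizes = [30000]
for i in range(12):
    sizes.append(math.ceil(sizes[-1] / 2) if i % 2 == 0 else sizes[-1])

print(sizes[-1])  # 469, not 496
```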
Training Objectives¶
Our model uses binary cross entropy as the loss function and Adam as the optimizer, consistent with the original study. The learning rate is set at 0.0001, and training is configured to run for up to 100 epochs, with early stopping implemented if no improvement in loss is observed over five consecutive epochs.
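The training configuration above (binary cross entropy, Adam at 0.0001, early stopping with patience 5) can be sketched as follows. The toy model and random data are hypothetical stand-ins so the loop is self-contained; the actual training uses the model and data loaders defined below:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins (hypothetical) so the sketch runs on its own; the real
# training uses the CNN model and the vital dataset loaders
model = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())
X = torch.randn(64, 4)
y = (X.sum(dim=1, keepdim=True) > 0).float()

loss_func = nn.BCELoss()                                   # binary cross entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam at 0.0001

best_val, patience, bad_epochs = float('inf'), 5, 0
for epoch in range(100):                                   # up to 100 epochs
    model.train()
    optimizer.zero_grad()
    loss = loss_func(model(X), y)
    loss.backward()
    optimizer.step()
    model.eval()
    with torch.no_grad():
        val_loss = loss_func(model(X), y).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # stop after 5 epochs without improvement
            break
```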
# First define the residual block, which is reused 12x for each data track for each sample.
# Second define the primary model.
class ResidualBlock(nn.Module):
    def __init__(self, in_features: int, out_features: int, in_channels: int, out_channels: int, kernel_size: int, stride: int = 1, size_down: bool = False, ignoreSkipConnection: bool = False) -> None:
        super(ResidualBlock, self).__init__()
        self.ignoreSkipConnection = ignoreSkipConnection
        # calculate the padding required to produce the expected sequence length out of each residual block
        padding = int((((stride - 1) * in_features) - stride + kernel_size) / 2)
        self.size_down = size_down
        self.bn1 = nn.BatchNorm1d(in_channels)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(0.5)
        self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding, bias=False)
        self.bn2 = nn.BatchNorm1d(out_channels)
        self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding, bias=False)
        self.residualConv = nn.Conv1d(in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding, bias=False)
        # unclear where in the sequence this should take place; size down expressed in Supplemental Table S1
        if self.size_down:
            pool_padding = 1 if (in_features % 2 > 0) else 0
            self.downsample = nn.MaxPool1d(kernel_size=2, stride=2, padding=pool_padding)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        out = self.bn1(x)
        out = self.relu(out)
        out = self.dropout(out)
        out = self.conv1(out)
        if self.size_down:
            out = self.downsample(out)
        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv2(out)
        if not self.ignoreSkipConnection:
            if out.shape != identity.shape:
                # run the residual through a convolution when necessary
                identity = self.residualConv(identity)
                outlen = np.prod(out.shape)
                idlen = np.prod(identity.shape)
                # downsample when required
                if idlen > outlen:
                    identity = self.downsample(identity)
                # match dimensions
                identity = identity.reshape(out.shape)
            # add the residual
            out += identity
        return out
class HypotensionCNN(nn.Module):
    def __init__(self, useAbp: bool = True, useEeg: bool = False, useEcg: bool = False, device: str = "cpu", nResiduals: int = 12, ignoreSkipConnection: bool = False, useSigmoid: bool = True) -> None:
        assert useAbp or useEeg or useEcg, "At least one data track must be used"
        assert 0 < nResiduals <= 12, "Number of residual blocks must be between 1 and 12"
        super(HypotensionCNN, self).__init__()
        self.device = device
        self.useAbp = useAbp
        self.useEeg = useEeg
        self.useEcg = useEcg
        self.nResiduals = nResiduals
        self.useSigmoid = useSigmoid
        # Size of the concatenated output from the residual blocks
        concatSize = 0
        if useAbp:
            self.abpBlocks = []
            self.abpMultipliers = [1, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 6, 6]
            self.abpSizes = [30000, 15000, 15000, 7500, 7500, 3750, 3750, 1875, 1875, 938, 938, 469, 469]
            for i in range(self.nResiduals):
                downsample = i % 2 == 0
                self.abpBlocks.append(ResidualBlock(self.abpSizes[i], self.abpSizes[i + 1], self.abpMultipliers[i], self.abpMultipliers[i + 1], 15 if i < 6 else 7, 1, downsample, ignoreSkipConnection))
            self.abpResiduals = nn.Sequential(*self.abpBlocks)
            self.abpFc = nn.Linear(self.abpMultipliers[self.nResiduals] * self.abpSizes[self.nResiduals], 32)
            concatSize += 32
        if useEcg:
            self.ecgBlocks = []
            self.ecgMultipliers = [1, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 6, 6]
            self.ecgSizes = [30000, 15000, 15000, 7500, 7500, 3750, 3750, 1875, 1875, 938, 938, 469, 469]
            for i in range(self.nResiduals):
                downsample = i % 2 == 0
                self.ecgBlocks.append(ResidualBlock(self.ecgSizes[i], self.ecgSizes[i + 1], self.ecgMultipliers[i], self.ecgMultipliers[i + 1], 15 if i < 6 else 7, 1, downsample, ignoreSkipConnection))
            self.ecgResiduals = nn.Sequential(*self.ecgBlocks)
            self.ecgFc = nn.Linear(self.ecgMultipliers[self.nResiduals] * self.ecgSizes[self.nResiduals], 32)
            concatSize += 32
        if useEeg:
            self.eegBlocks = []
            self.eegMultipliers = [1, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 6, 6]
            self.eegSizes = [7680, 3840, 3840, 1920, 1920, 960, 960, 480, 480, 240, 240, 120, 120]
            for i in range(self.nResiduals):
                downsample = i % 2 == 0
                self.eegBlocks.append(ResidualBlock(self.eegSizes[i], self.eegSizes[i + 1], self.eegMultipliers[i], self.eegMultipliers[i + 1], 7 if i < 6 else 3, 1, downsample, ignoreSkipConnection))
            self.eegResiduals = nn.Sequential(*self.eegBlocks)
            self.eegFc = nn.Linear(self.eegMultipliers[self.nResiduals] * self.eegSizes[self.nResiduals], 32)
            concatSize += 32
        self.fullLinear1 = nn.Linear(concatSize, 16)
        self.fullLinear2 = nn.Linear(16, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, abp: torch.Tensor, eeg: torch.Tensor, ecg: torch.Tensor) -> torch.Tensor:
        batchSize = len(abp)
        # conditionally operate the ABP, EEG, and ECG networks
        tensors = []
        if self.useAbp:
            self.abpResiduals.to(self.device)
            abp = self.abpResiduals(abp)
            totalLen = np.prod(abp.shape)
            abp = torch.reshape(abp, (batchSize, int(totalLen / batchSize)))
            abp = self.abpFc(abp)
            tensors.append(abp)
        if self.useEeg:
            self.eegResiduals.to(self.device)
            eeg = self.eegResiduals(eeg)
            totalLen = np.prod(eeg.shape)
            eeg = torch.reshape(eeg, (batchSize, int(totalLen / batchSize)))
            eeg = self.eegFc(eeg)
            tensors.append(eeg)
        if self.useEcg:
            self.ecgResiduals.to(self.device)
            ecg = self.ecgResiduals(ecg)
            totalLen = np.prod(ecg.shape)
            ecg = torch.reshape(ecg, (batchSize, int(totalLen / batchSize)))
            ecg = self.ecgFc(ecg)
            tensors.append(ecg)
        # concatenate the tensors along dimension 1 if there is more than one, otherwise use the single tensor
        merged = torch.cat(tensors, dim=1) if len(tensors) > 1 else tensors[0]
        totalLen = np.prod(merged.shape)
        merged = torch.reshape(merged, (batchSize, int(totalLen / batchSize)))
        out = self.fullLinear1(merged)
        out = self.fullLinear2(out)
        if self.useSigmoid:
            out = self.sigmoid(out)
        # We should not be seeing NaNs here; if we are, there is a problem upstream.
        return out
Training¶
As discussed earlier, our model uses binary cross entropy as the loss function and Adam as the optimizer, consistent with the original study. The learning rate is set at 0.0001, and training is configured to run for up to 100 epochs, with early stopping implemented if no improvement in loss is observed over five consecutive epochs.
def train_model_one_iter(model, device, loss_func, optimizer, train_loader):
    model.train()
    train_losses = []
    for abp, ecg, eeg, label in tqdm(train_loader):
        batch = len(abp)
        abp = abp.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        ecg = ecg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        eeg = eeg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        label = label.type(torch.float).reshape(batch, 1).to(device)
        optimizer.zero_grad()
        mdl = model(abp, eeg, ecg)
        loss = loss_func(torch.nan_to_num(mdl), label)
        loss.backward()
        optimizer.step()
        train_losses.append(loss.cpu().data.numpy())
    return np.mean(train_losses)
def evaluate_model(model, loss_func, val_loader):
    # Note: relies on the module-level `device` variable.
    model.eval()
    val_losses = []
    with torch.no_grad():  # no gradients needed during validation
        for abp, ecg, eeg, label in tqdm(val_loader):
            batch = len(abp)
            abp = abp.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
            ecg = ecg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
            eeg = eeg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
            label = label.type(torch.float).reshape(batch, 1).to(device)
            pred = model(abp, eeg, ecg)
            loss = loss_func(torch.nan_to_num(pred), label)
            val_losses.append(loss.item())
    return np.mean(val_losses)
def plot_losses(train_losses, val_losses, best_epoch, experimentName):
    print()
    print('Plot Validation and Loss Values from Training')
    # Create x-axis values for epochs
    epochs = range(len(train_losses))
    plt.figure(figsize=(16, 9))
    # Plot the training and validation losses
    plt.plot(epochs, train_losses, 'b', label='Training Loss')
    plt.plot(epochs, val_losses, 'r', label='Validation Loss')
    # Add a vertical line at the best epoch
    plt.axvline(x=best_epoch, color='g', linestyle='--', label='Best Epoch')
    # Shade everything to the right of the best epoch a light red
    plt.axvspan(best_epoch, max(epochs), facecolor='r', alpha=0.1)
    # Add labels, title, and legend
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.title(experimentName)
    plt.legend(loc='upper right')
    # Show the plot
    plt.show()
def eval_model(model, device, dataloader, loss_func, print_detailed: bool = False):
    model.eval()
    model = model.to(device)
    total_loss = 0
    all_predictions = []
    all_labels = []
    with torch.no_grad():
        for abp, ecg, eeg, label in tqdm(dataloader):
            batch = len(abp)
            abp = torch.nan_to_num(abp.reshape(batch, 1, -1)).type(torch.FloatTensor).to(device)
            ecg = torch.nan_to_num(ecg.reshape(batch, 1, -1)).type(torch.FloatTensor).to(device)
            eeg = torch.nan_to_num(eeg.reshape(batch, 1, -1)).type(torch.FloatTensor).to(device)
            label = label.type(torch.float).reshape(batch, 1).to(device)
            pred = model(abp, eeg, ecg)
            loss = loss_func(pred, label)
            total_loss += loss.item()
            all_predictions.append(pred.detach().cpu().numpy())
            all_labels.append(label.detach().cpu().numpy())
    # Flatten the per-batch arrays
    all_predictions = np.concatenate(all_predictions).flatten()
    all_labels = np.concatenate(all_labels).flatten()
    # Calculate AUROC and AUPRC (arguments are y_true, y_pred)
    auroc = roc_auc_score(all_labels, all_predictions)
    precision, recall, _ = precision_recall_curve(all_labels, all_predictions)
    auprc = auc(recall, precision)
    # Determine the optimal threshold, which is argmin(abs(sensitivity - specificity)) per the paper
    thresholds = np.linspace(0, 1, 101)  # 0 to 1 in 0.01 steps
    min_diff = float('inf')
    optimal_sensitivity = None
    optimal_specificity = None
    optimal_threshold = None
    for threshold in thresholds:
        all_predictions_binary = (all_predictions > threshold).astype(int)
        tn, fp, fn, tp = confusion_matrix(all_labels, all_predictions_binary).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        diff = abs(sensitivity - specificity)
        if diff < min_diff:
            min_diff = diff
            optimal_threshold = threshold
            optimal_sensitivity = sensitivity
            optimal_specificity = specificity
    avg_loss = total_loss / len(dataloader)
    if print_detailed:
        print(f"Predictions: {all_predictions}")
        print(f"Labels: {all_labels}")
        print(f"Loss: {avg_loss}")
        print(f"AUROC: {auroc}")
        print(f"AUPRC: {auprc}")
        print(f"Sensitivity: {optimal_sensitivity}")
        print(f"Specificity: {optimal_specificity}")
        print(f"Threshold: {optimal_threshold}")
    return all_predictions, all_labels, avg_loss, auroc, auprc, optimal_sensitivity, optimal_specificity, optimal_threshold
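`eval_model` picks the decision threshold minimizing |sensitivity − specificity|, the criterion used in the paper. A minimal standalone sketch of that criterion on hypothetical toy scores (independent of the model and dataloaders above):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# hypothetical predicted probabilities and true labels
scores = np.array([0.1, 0.2, 0.35, 0.4, 0.6, 0.8, 0.9])
labels = np.array([0, 0, 0, 1, 1, 1, 1])

best_t, best_diff = None, float('inf')
for t in np.linspace(0, 1, 101):  # sweep thresholds in 0.01 steps
    preds = (scores > t).astype(int)
    tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    if abs(sensitivity - specificity) < best_diff:
        best_diff = abs(sensitivity - specificity)
        best_t = t
```

For this perfectly separable toy case the sweep lands on a threshold where sensitivity equals specificity (both 1.0); on real model outputs the criterion balances the two error rates rather than maximizing either alone.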
def print_all_evals(model, models, device, val_loader, test_loader, loss_func, print_detailed: bool = False):
    print()
    print('Generate AUROC/AUPRC for Each Intermediate Model')
    print()
    val_aurocs = []
    val_auprcs = []
    test_aurocs = []
    test_auprcs = []
    for mod in models:
        model.load_state_dict(torch.load(mod))
        model.train(False)
        print('Intermediate Model:')
        print(f'  {mod}')
        # validation loop
        print("AUROC/AUPRC on Validation Data")
        _, _, _, valid_auroc, valid_auprc, _, _, _ = \
            eval_model(model, device, val_loader, loss_func, print_detailed)
        val_aurocs.append(valid_auroc)
        val_auprcs.append(valid_auprc)
        print()
        # test loop
        print("AUROC/AUPRC on Test Data")
        _, _, _, test_auroc, test_auprc, _, _, _ = \
            eval_model(model, device, test_loader, loss_func, print_detailed)
        test_aurocs.append(test_auroc)
        test_auprcs.append(test_auprc)
        print()
    return val_aurocs, val_auprcs, test_aurocs, test_auprcs
def plot_auroc_auprc(val_losses, val_aurocs, val_auprcs, test_aurocs, test_auprcs, all_models, best_epoch):
    print()
    print('Plot AUROC/AUPRC for Each Intermediate Model')
    print()
    # Create x-axis values for epochs
    epochs = range(len(val_aurocs))
    # Find the model with the highest test AUROC
    np_test_aurocs = np.array(test_aurocs)
    test_auroc_idx = np.argmax(np_test_aurocs)
    plt.figure(figsize=(16, 9))
    # Plot the validation and test AUROC/AUPRC curves
    plt.plot(epochs, val_aurocs, 'C0', label='AUROC - Validation')
    plt.plot(epochs, test_aurocs, 'C1', label='AUROC - Test')
    plt.plot(epochs, val_auprcs, 'C2', label='AUPRC - Validation')
    plt.plot(epochs, test_auprcs, 'C3', label='AUPRC - Test')
    # Add vertical lines at the best epochs
    plt.axvline(x=best_epoch, color='g', linestyle='--', label='Best Epoch - Validation Loss')
    plt.axvline(x=test_auroc_idx, color='maroon', linestyle='--', label='Best Epoch - Test AUROC')
    # Shade everything to the right of the best model a light red
    plt.axvspan(test_auroc_idx, max(epochs), facecolor='r', alpha=0.1)
    # Add labels, title, and legend
    plt.xlabel('Epochs')
    plt.ylabel('AUROC / AUPRC')
    plt.title('Validation and Test AUROC by Model Iteration Across Training')
    plt.legend(loc='right')
    # Show the plot
    plt.show()
    return np_test_aurocs, test_auroc_idx
def run_experiment(
    experimentNamePrefix: str = None,
    useAbp: bool = True,
    useEeg: bool = False,
    useEcg: bool = False,
    nResiduals: int = 12,
    skip_connection: bool = False,
    batch_size: int = 64,
    learning_rate: float = 1e-4,
    weight_decay: float = 0.0,
    balance_labels: bool = False,
    pos_weight: float = None,
    max_epochs: int = 100,
    patience: int = 15,
    device: str = "cpu"
):
    # Build the experiment name from the enabled options
    experimentName = ""
    experimentOptions = [experimentNamePrefix, 'ABP', 'EEG', 'ECG', 'SKIPCONNECTION']
    experimentValues = [experimentNamePrefix is not None, useAbp, useEeg, useEcg, skip_connection]
    experimentFlags = [name for name, value in zip(experimentOptions, experimentValues) if value]
    if experimentFlags:
        experimentName = "_".join(experimentFlags)
    experimentName = f"{experimentName}_{nResiduals}_RESIDUAL_BLOCKS_{batch_size}_BATCH_SIZE_{learning_rate}_LEARNING_RATE"
    if weight_decay is not None and weight_decay != 0.0:
        experimentName = f"{experimentName}_{weight_decay}_WEIGHT_DECAY"
    # PREDICTION_WINDOW, MAX_CASES, VITAL_MODELS, and the datasets are module-level globals
    predictionWindow = 'ALL' if PREDICTION_WINDOW == 'ALL' else f'{PREDICTION_WINDOW:03}'
    experimentName = f"{experimentName}_{predictionWindow}_MINS"
    maxCases = '_ALL' if MAX_CASES is None else f'{MAX_CASES:04}'
    experimentName = f"{experimentName}_{maxCases}_MAX_CASES"
    # default positive-class weight based on the empirical label split
    my_pos_weight = 4.0
    if balance_labels and pos_weight is not None:
        my_pos_weight = pos_weight
    print("Experiment Setup")
    print(f'  name: {experimentName}')
    print(f'  prediction_window: {predictionWindow}')
    print(f'  max_cases: {maxCases}')
    print(f'  use_abp: {useAbp}')
    print(f'  use_eeg: {useEeg}')
    print(f'  use_ecg: {useEcg}')
    print(f'  n_residuals: {nResiduals}')
    print(f'  skip_connection: {skip_connection}')
    print(f'  batch_size: {batch_size}')
    print(f'  learning_rate: {learning_rate}')
    print(f'  weight_decay: {weight_decay}')
    print(f'  balance_labels: {balance_labels}')
    if balance_labels:
        print(f'  pos_weight: {my_pos_weight}')
    print(f'  max_epochs: {max_epochs}')
    print(f'  patience: {patience}')
    print(f'  device: {device}')
    print()
    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
    test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
    # Disable the final sigmoid activation when using BCEWithLogitsLoss
    model = HypotensionCNN(useAbp, useEeg, useEcg, device, nResiduals, skip_connection, useSigmoid=(not balance_labels))
    model = model.to(device)
    if balance_labels:
        # Only the weight for the positive class
        loss_func = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([my_pos_weight]).to(device))
    else:
        loss_func = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=weight_decay)
    print('Model Architecture')
    print(model)
    print()
    print('Training Loop')
    # Training loop
    best_epoch = 0
    train_losses = []
    val_losses = []
    best_loss = float('inf')
    no_improve_epochs = 0
    model_path = os.path.join(VITAL_MODELS, f"{experimentName}.model")
    all_models = []
    for i in range(max_epochs):
        # Train the model and get the training loss
        train_loss = train_model_one_iter(model, device, loss_func, optimizer, train_loader)
        train_losses.append(train_loss)
        # Calculate validation loss
        val_loss = evaluate_model(model, loss_func, val_loader)
        val_losses.append(val_loss)
        print(f"[{datetime.now()}] Completed epoch {i} with training loss {train_loss:.8f}, validation loss {val_loss:.8f}")
        # Save all intermediate models.
        tmp_model_path = os.path.join(VITAL_MODELS, f"{experimentName}_{i:04d}.model")
        torch.save(model.state_dict(), tmp_model_path)
        all_models.append(tmp_model_path)
        # Check if validation loss has improved
        if val_loss < best_loss:
            best_epoch = i
            best_loss = val_loss
            no_improve_epochs = 0
            torch.save(model.state_dict(), model_path)
            print(f"Validation loss improved to {val_loss:.8f}. Model saved.")
        else:
            no_improve_epochs += 1
            print(f"No improvement in validation loss. {no_improve_epochs} epochs without improvement.")
        # exit early if no improvement in loss over the last `patience` epochs
        if no_improve_epochs >= patience:
            print("Early stopping due to no improvement in validation loss.")
            break
    model.train(False)
    # Plot the training and validation losses across all training epochs.
    plot_losses(train_losses, val_losses, best_epoch, experimentName)
    # Generate AUROC/AUPRC for each intermediate model generated across training epochs.
    val_aurocs, val_auprcs, test_aurocs, test_auprcs = \
        print_all_evals(model, all_models, device, val_loader, test_loader, loss_func, print_detailed=False)
    # Find the model with the highest AUROC. Plot AUROC/AUPRC across all epochs.
    np_test_aurocs, test_auroc_idx = plot_auroc_auprc(val_losses, val_aurocs, val_auprcs,
                                                      test_aurocs, test_auprcs, all_models, best_epoch)
    ## AUROC / AUPRC - Model with Best Validation Loss
    best_model_val_loss = all_models[best_epoch]
    print('AUROC/AUPRC Plots - Best Model Based on Validation Loss')
    print(f'Epoch with best Validation Loss: {best_epoch:3}, {val_losses[best_epoch]:.4}')
    print('Best Model Based on Validation Loss:')
    print(f'  {best_model_val_loss}')
    print('Generate Stats Based on Test Data')
    model.load_state_dict(torch.load(best_model_val_loss))
    model.train(False)
    best_model_val_test_predictions, best_model_val_test_labels, test_loss, \
        best_model_val_test_auroc, best_model_val_test_auprc, test_sensitivity, test_specificity, \
        best_model_val_test_threshold = eval_model(model, device, test_loader, loss_func, print_detailed=False)
    # ROC curve (arguments are y_true, y_pred)
    display = RocCurveDisplay.from_predictions(
        best_model_val_test_labels,
        best_model_val_test_predictions,
        plot_chance_level=True
    )
    plt.show()
    print(f'best_model_val_test_auroc: {best_model_val_test_auroc}')
    best_model_val_test_predictions_binary = \
        (best_model_val_test_predictions > best_model_val_test_threshold).astype(int)
    # Precision-recall curve (arguments are y_true, y_pred)
    display = PrecisionRecallDisplay.from_predictions(
        best_model_val_test_labels,
        best_model_val_test_predictions_binary,
        plot_chance_level=True
    )
    plt.show()
    print(f'best_model_val_test_auprc: {best_model_val_test_auprc}')
    print()
    ## AUROC / AUPRC - Model with Best AUROC
    # Find the model with the highest test AUROC
    best_model_auroc = all_models[test_auroc_idx]
    print('AUROC/AUPRC Plots - Best Model Based on Model AUROC')
    print(f'Epoch with best model Test AUROC: {test_auroc_idx:3}, {np.max(np_test_aurocs):.4}')
    print('Best Model Based on Model AUROC:')
    print(f'  {best_model_auroc}')
    print('Generate Stats Based on Test Data')
    model.load_state_dict(torch.load(best_model_auroc))
    model.train(False)
    best_model_auroc_test_predictions, best_model_auroc_test_labels, test_loss, \
        best_model_auroc_test_auroc, best_model_auroc_test_auprc, test_sensitivity, test_specificity, \
        best_model_auroc_test_threshold = eval_model(model, device, test_loader, loss_func, print_detailed=False)
    # ROC curve (arguments are y_true, y_pred)
    display = RocCurveDisplay.from_predictions(
        best_model_auroc_test_labels,
        best_model_auroc_test_predictions,
        plot_chance_level=True
    )
    plt.show()
    print(f'best_model_auroc_test_auroc: {best_model_auroc_test_auroc}')
    best_model_auroc_test_predictions_binary = \
        (best_model_auroc_test_predictions > best_model_auroc_test_threshold).astype(int)
    # Precision-recall curve (arguments are y_true, y_pred)
    display = PrecisionRecallDisplay.from_predictions(
        best_model_auroc_test_labels,
        best_model_auroc_test_predictions_binary,
        plot_chance_level=True
    )
    plt.show()
    print(f"best_model_auroc_test_auprc: {best_model_auroc_test_auprc}")
print('Time to experiment!')
Time to experiment!
run_experiment(
    experimentNamePrefix=None,
    useAbp=True,
    useEeg=False,
    useEcg=False,
    nResiduals=12,
    skip_connection=False,
    batch_size=128,
    learning_rate=1e-4,
    weight_decay=0.0,
    balance_labels=False,
    pos_weight=None,
    max_epochs=100,
    patience=15,
    device=device
)
Experiment Setup
name: ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES
prediction_window: 003
max_cases: _ALL
use_abp: True
use_eeg: False
use_ecg: False
n_residuals: 12
skip_connection: False
batch_size: 128
learning_rate: 0.0001
weight_decay: 0.0
balance_labels: False
max_epochs: 100
patience: 15
device: mps
Model Architecture
HypotensionCNN(
(abpResiduals): Sequential(
(0): ResidualBlock(
(bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(1): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(2): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(3): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(4): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(5): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(6): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(7): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
(8): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
)
(9): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
(10): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(11): ResidualBlock(
(bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
)
(abpFc): Linear(in_features=2814, out_features=32, bias=True)
(fullLinear1): Linear(in_features=32, out_features=16, bias=True)
(fullLinear2): Linear(in_features=16, out_features=1, bias=True)
(sigmoid): Sigmoid()
)
Training Loop
[2024-05-01 19:14:14.791333] Completed epoch 0 with training loss 0.40477479, validation loss 0.45554507
Validation loss improved to 0.45554507. Model saved.
[2024-05-01 19:17:11.050422] Completed epoch 1 with training loss 0.38131657, validation loss 0.46791103
No improvement in validation loss. 1 epochs without improvement.
[2024-05-01 19:20:10.450683] Completed epoch 2 with training loss 0.37945434, validation loss 0.45835856
No improvement in validation loss. 2 epochs without improvement.
[2024-05-01 19:23:09.885155] Completed epoch 3 with training loss 0.37677917, validation loss 0.45109558
Validation loss improved to 0.45109558. Model saved.
[2024-05-01 19:26:08.776039] Completed epoch 4 with training loss 0.37610382, validation loss 0.44325227
Validation loss improved to 0.44325227. Model saved.
[2024-05-01 19:29:07.716218] Completed epoch 5 with training loss 0.37665263, validation loss 0.42917484
Validation loss improved to 0.42917484. Model saved.
[2024-05-01 19:32:11.755371] Completed epoch 6 with training loss 0.37562460, validation loss 0.41526288
Validation loss improved to 0.41526288. Model saved.
[2024-05-01 19:35:16.481772] Completed epoch 7 with training loss 0.37454793, validation loss 0.50944167
No improvement in validation loss. 1 epochs without improvement.
[2024-05-01 19:38:20.488972] Completed epoch 8 with training loss 0.37332124, validation loss 0.47994232
No improvement in validation loss. 2 epochs without improvement.
[2024-05-01 19:41:20.563468] Completed epoch 9 with training loss 0.37266049, validation loss 0.43820751
No improvement in validation loss. 3 epochs without improvement.
[2024-05-01 19:44:17.993182] Completed epoch 10 with training loss 0.37194559, validation loss 0.43242472
No improvement in validation loss. 4 epochs without improvement.
[2024-05-01 19:47:15.100255] Completed epoch 11 with training loss 0.37221313, validation loss 0.43985155
No improvement in validation loss. 5 epochs without improvement.
[2024-05-01 19:50:11.346896] Completed epoch 12 with training loss 0.37117884, validation loss 0.42860523
No improvement in validation loss. 6 epochs without improvement.
[2024-05-01 19:53:09.893089] Completed epoch 13 with training loss 0.37118828, validation loss 0.44210875
No improvement in validation loss. 7 epochs without improvement.
[2024-05-01 19:56:07.143430] Completed epoch 14 with training loss 0.37080163, validation loss 0.42627117
No improvement in validation loss. 8 epochs without improvement.
[2024-05-01 19:59:07.010224] Completed epoch 15 with training loss 0.36913958, validation loss 0.43209347
No improvement in validation loss. 9 epochs without improvement.
[2024-05-01 20:02:03.959828] Completed epoch 16 with training loss 0.36927640, validation loss 0.43849418
No improvement in validation loss. 10 epochs without improvement.
[2024-05-01 20:05:03.594702] Completed epoch 17 with training loss 0.36985189, validation loss 0.40969762
Validation loss improved to 0.40969762. Model saved.
[2024-05-01 20:08:05.464578] Completed epoch 18 with training loss 0.37064341, validation loss 0.41983223
No improvement in validation loss. 1 epochs without improvement.
[2024-05-01 20:11:03.162163] Completed epoch 19 with training loss 0.36817861, validation loss 0.45309845
No improvement in validation loss. 2 epochs without improvement.
[2024-05-01 20:13:59.424666] Completed epoch 20 with training loss 0.36851323, validation loss 0.40629002
Validation loss improved to 0.40629002. Model saved.
[2024-05-01 20:16:56.679760] Completed epoch 21 with training loss 0.36769435, validation loss 0.44488302
No improvement in validation loss. 1 epochs without improvement.
[2024-05-01 20:19:52.176090] Completed epoch 22 with training loss 0.36673540, validation loss 0.41815782
No improvement in validation loss. 2 epochs without improvement.
[2024-05-01 20:22:47.908853] Completed epoch 23 with training loss 0.36804542, validation loss 0.40913418
No improvement in validation loss. 3 epochs without improvement.
[2024-05-01 20:25:43.070562] Completed epoch 24 with training loss 0.36795455, validation loss 0.40897727
No improvement in validation loss. 4 epochs without improvement.
[2024-05-01 20:28:37.157499] Completed epoch 25 with training loss 0.36613998, validation loss 0.42734683
No improvement in validation loss. 5 epochs without improvement.
[2024-05-01 20:31:29.957848] Completed epoch 26 with training loss 0.36645782, validation loss 0.41637802
No improvement in validation loss. 6 epochs without improvement.
[2024-05-01 20:34:23.129077] Completed epoch 27 with training loss 0.36572224, validation loss 0.43250138
No improvement in validation loss. 7 epochs without improvement.
[2024-05-01 20:37:16.916885] Completed epoch 28 with training loss 0.36661056, validation loss 0.41363439
No improvement in validation loss. 8 epochs without improvement.
[2024-05-01 20:40:10.634001] Completed epoch 29 with training loss 0.36529887, validation loss 0.41809785
No improvement in validation loss. 9 epochs without improvement.
[2024-05-01 20:43:03.802995] Completed epoch 30 with training loss 0.36576879, validation loss 0.45826805
No improvement in validation loss. 10 epochs without improvement.
[2024-05-01 20:45:56.312382] Completed epoch 31 with training loss 0.36628047, validation loss 0.40960923
No improvement in validation loss. 11 epochs without improvement.
[2024-05-01 20:48:52.986416] Completed epoch 32 with training loss 0.36507675, validation loss 0.45192024
No improvement in validation loss. 12 epochs without improvement.
[2024-05-01 20:51:53.025505] Completed epoch 33 with training loss 0.36503857, validation loss 0.44290003
No improvement in validation loss. 13 epochs without improvement.
[2024-05-01 20:54:44.757812] Completed epoch 34 with training loss 0.36345723, validation loss 0.41234040
No improvement in validation loss. 14 epochs without improvement.
[2024-05-01 20:57:37.430554] Completed epoch 35 with training loss 0.36366972, validation loss 0.41113794
No improvement in validation loss. 15 epochs without improvement.
Early stopping due to no improvement in validation loss.
Plot Validation and Loss Values from Training
Generate AUROC/AUPRC for Each Intermediate Model

Each intermediate model checkpoint (`./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0000.model` through `..._0035.model`, one per epoch) was evaluated on the validation and test sets. Metrics below are rounded to four decimal places.

AUROC/AUPRC on Validation Data

| Checkpoint | Loss | AUROC | AUPRC | Sensitivity | Specificity | Threshold |
|---|---|---|---|---|---|---|
| 0000 | 0.4579 | 0.7698 | 0.4657 | 0.7222 | 0.6792 | 0.09 |
| 0001 | 0.4627 | 0.7718 | 0.4697 | 0.7174 | 0.6860 | 0.08 |
| 0002 | 0.4550 | 0.7722 | 0.4691 | 0.7258 | 0.6808 | 0.08 |
| 0003 | 0.4504 | 0.7744 | 0.4728 | 0.7246 | 0.6884 | 0.09 |
| 0004 | 0.4407 | 0.7732 | 0.4699 | 0.6944 | 0.7235 | 0.11 |
| 0005 | 0.4275 | 0.7747 | 0.4695 | 0.7138 | 0.7053 | 0.11 |
| 0006 | 0.4130 | 0.7747 | 0.4681 | 0.7138 | 0.7050 | 0.14 |
| 0007 | 0.5037 | 0.7739 | 0.4698 | 0.6763 | 0.7382 | 0.06 |
| 0008 | 0.4762 | 0.7744 | 0.4683 | 0.7138 | 0.6977 | 0.07 |
| 0009 | 0.4353 | 0.7757 | 0.4693 | 0.6969 | 0.7159 | 0.11 |
| 0010 | 0.4348 | 0.7764 | 0.4678 | 0.6908 | 0.7265 | 0.11 |
| 0011 | 0.4259 | 0.7754 | 0.4664 | 0.6944 | 0.7178 | 0.12 |
| 0012 | 0.4308 | 0.7758 | 0.4659 | 0.6981 | 0.7172 | 0.12 |
| 0013 | 0.4411 | 0.7741 | 0.4645 | 0.7210 | 0.6914 | 0.10 |
| 0014 | 0.4281 | 0.7753 | 0.4641 | 0.7162 | 0.6960 | 0.11 |
| 0015 | 0.4320 | 0.7764 | 0.4650 | 0.7114 | 0.7091 | 0.11 |
| 0016 | 0.4436 | 0.7775 | 0.4663 | 0.7234 | 0.7001 | 0.10 |
| 0017 | 0.4096 | 0.7770 | 0.4676 | 0.7101 | 0.7066 | 0.15 |
| 0018 | 0.4297 | 0.7766 | 0.4673 | 0.7077 | 0.7099 | 0.12 |
| 0019 | 0.4461 | 0.7771 | 0.4684 | 0.7150 | 0.7072 | 0.09 |
| 0020 | 0.4038 | 0.7773 | 0.4667 | 0.7077 | 0.7069 | 0.18 |
| 0021 | 0.4506 | 0.7761 | 0.4698 | 0.7041 | 0.7191 | 0.10 |
| 0022 | 0.4070 | 0.7793 | 0.4688 | 0.7005 | 0.7300 | 0.15 |
| 0023 | 0.4157 | 0.7780 | 0.4703 | 0.7017 | 0.7213 | 0.14 |
| 0024 | 0.4076 | 0.7775 | 0.4678 | 0.7101 | 0.7113 | 0.20 |
| 0025 | 0.4348 | 0.7779 | 0.4688 | 0.6969 | 0.7330 | 0.12 |
| 0026 | 0.4125 | 0.7787 | 0.4700 | 0.7029 | 0.7229 | 0.15 |
| 0027 | 0.4294 | 0.7776 | 0.4700 | 0.7210 | 0.7069 | 0.11 |
| 0028 | 0.4169 | 0.7802 | 0.4686 | 0.7065 | 0.7227 | 0.14 |
| 0029 | 0.4138 | 0.7794 | 0.4709 | 0.7138 | 0.7208 | 0.13 |
| 0030 | 0.4662 | 0.7790 | 0.4717 | 0.7210 | 0.7083 | 0.08 |
| 0031 | 0.4034 | 0.7813 | 0.4738 | 0.7162 | 0.7123 | 0.15 |
| 0032 | 0.4534 | 0.7798 | 0.4716 | 0.7126 | 0.7292 | 0.09 |
| 0033 | 0.4358 | 0.7798 | 0.4747 | 0.7246 | 0.7009 | 0.10 |
| 0034 | 0.4104 | 0.7805 | 0.4726 | 0.7041 | 0.7338 | 0.16 |
| 0035 | 0.4212 | 0.7816 | 0.4754 | 0.7174 | 0.7210 | 0.14 |

AUROC/AUPRC on Test Data

| Checkpoint | Loss | AUROC | AUPRC | Sensitivity | Specificity | Threshold |
|---|---|---|---|---|---|---|
| 0000 | 0.4452 | 0.7818 | 0.4795 | 0.7439 | 0.6916 | 0.09 |
| 0001 | 0.4534 | 0.7832 | 0.4800 | 0.7367 | 0.7017 | 0.08 |
| 0002 | 0.4480 | 0.7834 | 0.4785 | 0.7419 | 0.6972 | 0.08 |
| 0003 | 0.4342 | 0.7852 | 0.4812 | 0.7383 | 0.7078 | 0.09 |
| 0004 | 0.4235 | 0.7842 | 0.4815 | 0.7403 | 0.7048 | 0.10 |
| 0005 | 0.4221 | 0.7848 | 0.4800 | 0.7205 | 0.7244 | 0.11 |
| 0006 | 0.4039 | 0.7848 | 0.4816 | 0.7237 | 0.7222 | 0.14 |
| 0007 | 0.4962 | 0.7833 | 0.4780 | 0.6926 | 0.7538 | 0.06 |
| 0008 | 0.4678 | 0.7832 | 0.4740 | 0.7270 | 0.7145 | 0.07 |
| 0009 | 0.4206 | 0.7846 | 0.4787 | 0.7116 | 0.7319 | 0.11 |
| 0010 | 0.4226 | 0.7854 | 0.4785 | 0.7375 | 0.7039 | 0.10 |
| 0011 | 0.4169 | 0.7844 | 0.4757 | 0.7080 | 0.7363 | 0.12 |
| 0012 | 0.4173 | 0.7846 | 0.4755 | 0.7072 | 0.7339 | 0.12 |
| 0013 | 0.4280 | 0.7824 | 0.4716 | 0.7298 | 0.7068 | 0.10 |
| 0014 | 0.4203 | 0.7835 | 0.4715 | 0.7242 | 0.7157 | 0.11 |
| 0015 | 0.4222 | 0.7838 | 0.4705 | 0.7153 | 0.7258 | 0.11 |
| 0016 | 0.4280 | 0.7848 | 0.4722 | 0.7302 | 0.7161 | 0.10 |
| 0017 | 0.4000 | 0.7851 | 0.4743 | 0.7185 | 0.7250 | 0.15 |
| 0018 | 0.4149 | 0.7841 | 0.4720 | 0.7169 | 0.7272 | 0.12 |
| 0019 | 0.4368 | 0.7844 | 0.4729 | 0.7197 | 0.7231 | 0.09 |
| 0020 | 0.3955 | 0.7843 | 0.4722 | 0.7165 | 0.7279 | 0.18 |
| 0021 | 0.4331 | 0.7820 | 0.4692 | 0.7060 | 0.7341 | 0.10 |
| 0022 | 0.4040 | 0.7855 | 0.4708 | 0.7330 | 0.7089 | 0.14 |
| 0023 | 0.4046 | 0.7844 | 0.4715 | 0.7092 | 0.7337 | 0.14 |
| 0024 | 0.3969 | 0.7834 | 0.4680 | 0.7145 | 0.7271 | 0.20 |
| 0025 | 0.4205 | 0.7834 | 0.4670 | 0.7003 | 0.7391 | 0.12 |
| 0026 | 0.4000 | 0.7846 | 0.4699 | 0.7084 | 0.7347 | 0.15 |
| 0027 | 0.4197 | 0.7834 | 0.4677 | 0.7213 | 0.7161 | 0.11 |
| 0028 | 0.4059 | 0.7857 | 0.4684 | 0.7100 | 0.7314 | 0.14 |
| 0029 | 0.4086 | 0.7846 | 0.4688 | 0.7096 | 0.7302 | 0.13 |
| 0030 | 0.4522 | 0.7830 | 0.4660 | 0.7165 | 0.7166 | 0.08 |
| 0031 | 0.3987 | 0.7858 | 0.4702 | 0.7201 | 0.7238 | 0.15 |
| 0032 | 0.4458 | 0.7828 | 0.4630 | 0.7023 | 0.7380 | 0.09 |
| 0033 | 0.4293 | 0.7824 | 0.4641 | 0.7266 | 0.7087 | 0.10 |
| 0034 | 0.4044 | 0.7829 | 0.4619 | 0.7334 | 0.7024 | 0.15 |
| 0035 | 0.4043 | 0.7847 | 0.4655 | 0.7132 | 0.7278 | 0.14 |

Plot AUROC/AUPRC for Each Intermediate Model
AUROC/AUPRC Plots - Best Model Based on Validation Loss

Epoch with best Validation Loss: 20, 0.4063
Best Model Based on Validation Loss: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0020.model

Generate Stats Based on Test Data
Loss: 0.39551276699812327 AUROC: 0.7843495037303206 AUPRC: 0.47215299634923835 Sensitivity: 0.7164781906300485 Specificity: 0.7278604321643641 Threshold: 0.18
best_model_val_test_auroc: 0.7843495037303206
best_model_val_test_auprc: 0.47215299634923835

AUROC/AUPRC Plots - Best Model Based on Model AUROC

Epoch with best model Test AUROC: 31, 0.7858
Best Model Based on Model AUROC: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0031.model

Generate Stats Based on Test Data
Loss: 0.3986843053113531 AUROC: 0.7858174572244951 AUPRC: 0.47019627238786527 Sensitivity: 0.720113085621971 Specificity: 0.7237867516826072 Threshold: 0.15
best_model_auroc_test_auroc: 0.7858174572244951
best_model_auroc_test_auprc: 0.47019627238786527
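The checkpoint statistics reported above combine ranking metrics (AUROC, AUPRC) with threshold-dependent ones (sensitivity and specificity at the reported cutoff). As a reminder of the definitions, here is a minimal pure-Python sketch; the function names are illustrative, and the project presumably uses a library implementation such as scikit-learn's `roc_auc_score`:

```python
# Illustrative definitions of the reported metrics (not the project's code).

def auroc(scores, labels):
    """AUROC = probability that a randomly chosen positive outscores a
    randomly chosen negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a cutoff."""
    tp = sum(y == 1 and s >= threshold for s, y in zip(scores, labels))
    fn = sum(y == 1 and s < threshold for s, y in zip(scores, labels))
    tn = sum(y == 0 and s < threshold for s, y in zip(scores, labels))
    fp = sum(y == 0 and s >= threshold for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.3, 0.1]  # toy model outputs
labels = [1, 0, 1, 0]          # toy ground truth
print(auroc(scores, labels))                         # 0.75
print(sensitivity_specificity(scores, labels, 0.5))  # (0.5, 0.5)
```

The low reported thresholds (0.06-0.20) reflect that the operating point is chosen on predicted probabilities for a rare positive class, not at the default 0.5.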
run_experiment(
    experimentNamePrefix=None,
    useAbp=True,    # ABP-only run: EEG and ECG inputs disabled
    useEeg=False,
    useEcg=False,
    nResiduals=12,
    skip_connection=False,
    batch_size=128,
    learning_rate=1e-4,
    weight_decay=0.0,
    balance_labels=True,  # balance positive/negative labels during training
    #pos_weight=2.0,
    pos_weight=None,
    max_epochs=100,
    patience=15,    # stop after 15 epochs without validation improvement
    device=device
)
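Both `balance_labels` and `pos_weight` are ways to counter the rarity of IOH-positive samples: the former by resampling, the latter by up-weighting the positive term of the loss. A minimal sketch of the weighting idea (illustrative only; the project's actual loss function is not shown here, though the behaviour is analogous to `pos_weight` in `torch.nn.BCEWithLogitsLoss`, which applies the weight to logits rather than probabilities):

```python
import math

def weighted_bce(probs, labels, pos_weight=1.0):
    """Binary cross-entropy with the positive-class term scaled by pos_weight,
    so missed positives cost more than false alarms."""
    total = 0.0
    for p, y in zip(probs, labels):
        total += -(pos_weight * y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(probs)

loss = weighted_bce([0.9, 0.2], [1, 0], pos_weight=4.0)
```

With `pos_weight > 1`, a confident miss on a positive sample dominates the average, pushing the model toward higher sensitivity at the cost of specificity.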
Experiment Setup
name: ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES
prediction_window: 003
max_cases: _ALL
use_abp: True
use_eeg: False
use_ecg: False
n_residuals: 12
skip_connection: False
batch_size: 128
learning_rate: 0.0001
weight_decay: 0.0
balance_labels: True
pos_weight: 4.0
max_epochs: 100
patience: 15
device: mps
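The `name` echoed above appears to be assembled from the run parameters. A plausible reconstruction (the project's actual naming helper may differ; the function below is hypothetical):

```python
def experiment_name(signals, n_residuals, batch_size, learning_rate,
                    prediction_window, max_cases):
    """Build an experiment name like the one echoed in the setup printout,
    e.g. 'ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_...'."""
    return (f"{signals}_{n_residuals}_RESIDUAL_BLOCKS_{batch_size}_BATCH_SIZE_"
            f"{learning_rate}_LEARNING_RATE_{prediction_window}_MINS_"
            f"{max_cases}_MAX_CASES")

name = experiment_name("ABP", 12, 128, 0.0001, "003", "_ALL")
```

Note the double underscore in `MINS__ALL`: it comes from the `_ALL` max-cases token carrying its own leading underscore.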
Model Architecture
HypotensionCNN(
(abpResiduals): Sequential(
(0): ResidualBlock(
(bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(1): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(2): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(3): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(4): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(5): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(6): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(7): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
(8): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
)
(9): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
(10): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(11): ResidualBlock(
(bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
)
(abpFc): Linear(in_features=2814, out_features=32, bias=True)
(fullLinear1): Linear(in_features=32, out_features=16, bias=True)
(fullLinear2): Linear(in_features=16, out_features=1, bias=True)
(sigmoid): Sigmoid()
)
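A quick sanity check on the printed architecture: six of the twelve residual blocks end in a stride-2 `MaxPool1d` (the one in block 8 with `padding=1`), so an input of 30,000 ABP samples (an assumption, consistent with VitalDB's 500 Hz ABP over a 60 s window) is reduced to 469 samples across the final 6 channels, matching `in_features=2814` on `abpFc`:

```python
def pooled_length(length, kernel=2, stride=2, padding=0):
    """Output length of a MaxPool1d layer (floor mode)."""
    return (length + 2 * padding - kernel) // stride + 1

# Pools appear after blocks 0, 2, 4, 6, 8, 10; block 8's pool uses padding=1.
length = 30_000  # assumed input: 60 s of ABP sampled at 500 Hz
for padding in [0, 0, 0, 0, 1, 0]:
    length = pooled_length(length, padding=padding)

print(length, 6 * length)  # 469 2814
```

So the flattened feature vector fed to `abpFc` is 6 channels x 469 samples = 2814 features.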
Training Loop
[2024-04-30 22:24:30.932544] Completed epoch 0 with training loss 0.89325160, validation loss 1.10563266. Validation loss improved to 1.10563266. Model saved.
[2024-04-30 22:27:31.140293] Completed epoch 1 with training loss 0.84850127, validation loss 1.18715870. No improvement in validation loss. 1 epoch without improvement.
[2024-04-30 22:30:29.178303] Completed epoch 2 with training loss 0.84483790, validation loss 1.26821733. No improvement in validation loss. 2 epochs without improvement.
[2024-04-30 22:33:26.603585] Completed epoch 3 with training loss 0.83757305, validation loss 0.97054505. Validation loss improved to 0.97054505. Model saved.
[2024-04-30 22:36:23.234127] Completed epoch 4 with training loss 0.83575928, validation loss 1.12168992. No improvement in validation loss. 1 epoch without improvement.
[2024-04-30 22:39:26.589147] Completed epoch 5 with training loss 0.83405274, validation loss 0.96441174. Validation loss improved to 0.96441174. Model saved.
[2024-04-30 22:42:27.926371] Completed epoch 6 with training loss 0.83284408, validation loss 1.01532876. No improvement in validation loss. 1 epoch without improvement.
[2024-04-30 22:45:27.613905] Completed epoch 7 with training loss 0.82606781, validation loss 1.05095792. No improvement in validation loss. 2 epochs without improvement.
[2024-04-30 22:48:28.765078] Completed epoch 8 with training loss 0.82820880, validation loss 0.95587742. Validation loss improved to 0.95587742. Model saved.
[2024-04-30 22:51:28.017039] Completed epoch 9 with training loss 0.82731426, validation loss 1.08990884. No improvement in validation loss. 1 epoch without improvement.
[2024-04-30 22:54:28.634054] Completed epoch 10 with training loss 0.82335764, validation loss 1.00911760. No improvement in validation loss. 2 epochs without improvement.
[2024-04-30 22:57:26.396205] Completed epoch 11 with training loss 0.82138985, validation loss 0.98870349. No improvement in validation loss. 3 epochs without improvement.
[2024-04-30 23:00:19.184093] Completed epoch 12 with training loss 0.81883723, validation loss 1.03841245. No improvement in validation loss. 4 epochs without improvement.
[2024-04-30 23:03:12.252687] Completed epoch 13 with training loss 0.81714123, validation loss 1.03062737. No improvement in validation loss. 5 epochs without improvement.
[2024-04-30 23:06:05.448845] Completed epoch 14 with training loss 0.81623781, validation loss 1.06383657. No improvement in validation loss. 6 epochs without improvement.
[2024-04-30 23:08:58.901212] Completed epoch 15 with training loss 0.81594247, validation loss 1.06054223. No improvement in validation loss. 7 epochs without improvement.
[2024-04-30 23:11:52.239323] Completed epoch 16 with training loss 0.81236297, validation loss 0.99106967. No improvement in validation loss. 8 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-04-30 23:14:45.668285] Completed epoch 17 with training loss 0.81036752, validation loss 1.02456844 No improvement in validation loss. 9 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-04-30 23:17:39.271286] Completed epoch 18 with training loss 0.80962402, validation loss 1.06885386 No improvement in validation loss. 10 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-04-30 23:20:32.012681] Completed epoch 19 with training loss 0.80593342, validation loss 1.02084100 No improvement in validation loss. 11 epochs without improvement.
100%|██████████| 212/212 [02:30<00:00, 1.41it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-04-30 23:23:23.895615] Completed epoch 20 with training loss 0.80868673, validation loss 1.00137198 No improvement in validation loss. 12 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-04-30 23:26:17.410427] Completed epoch 21 with training loss 0.80614585, validation loss 0.99816036 No improvement in validation loss. 13 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-04-30 23:29:10.721827] Completed epoch 22 with training loss 0.80689758, validation loss 0.99274200 No improvement in validation loss. 14 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-04-30 23:32:03.901117] Completed epoch 23 with training loss 0.80639732, validation loss 1.02417898 No improvement in validation loss. 15 epochs without improvement. Early stopping due to no improvement in validation loss. Plot Validation and Loss Values from Training
Generate AUROC/AUPRC for Each Intermediate Model
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0000.model
  Validation: Loss: 1.1248068925407197 AUROC: 0.7794675482632075 AUPRC: 0.4697658571261917 Sensitivity: 0.014492753623188406 Specificity: 0.9991843393148451 Threshold: 0.0
  Test: Loss: 1.0850834222855392 AUROC: 0.788502532566267 AUPRC: 0.4839371208062614 Sensitivity: 0.012116316639741519 Specificity: 0.999025859015232 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0001.model
  Validation: Loss: 1.1775537265671625 AUROC: 0.7820875791033248 AUPRC: 0.47838764651161997 Sensitivity: 0.006038647342995169 Specificity: 1.0 Threshold: 0.0
  Test: Loss: 1.1420025693045721 AUROC: 0.7914006699526219 AUPRC: 0.49225847370885256 Sensitivity: 0.0020193861066235864 Specificity: 0.9999114417286574 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0002.model
  Validation: Loss: 1.2723263435893588 AUROC: 0.7800329941971194 AUPRC: 0.4815223477977743 Sensitivity: 0.007246376811594203 Specificity: 1.0 Threshold: 0.0
  Test: Loss: 1.2164162275967774 AUROC: 0.7899050330569857 AUPRC: 0.49287491176493914 Sensitivity: 0.0016155088852988692 Specificity: 0.9999114417286574 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0003.model
  Validation: Loss: 0.9701710508929359 AUROC: 0.7809813146716473 AUPRC: 0.4786348455527253 Sensitivity: 0.2971014492753623 Specificity: 0.9543230016313213 Threshold: 0.0
  Test: Loss: 0.9496706470295235 AUROC: 0.7893326590600978 AUPRC: 0.4889352232800339 Sensitivity: 0.2992730210016155 Specificity: 0.9591746369110875 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0004.model
  Validation: Loss: 1.12815640701188 AUROC: 0.780829609664988 AUPRC: 0.4807129686957357 Sensitivity: 0.03260869565217391 Specificity: 0.9989124524197933 Threshold: 0.0
  Test: Loss: 1.0895588938836698 AUROC: 0.7897265037308927 AUPRC: 0.4901590161953192 Sensitivity: 0.0327140549273021 Specificity: 0.9982288345731491 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0005.model
  Validation: Loss: 0.9595538609557681 AUROC: 0.7811356466048288 AUPRC: 0.47636923467203934 Sensitivity: 0.32971014492753625 Specificity: 0.9475258292550299 Threshold: 0.0
  Test: Loss: 0.9447187858599203 AUROC: 0.789205687386727 AUPRC: 0.48729351054823883 Sensitivity: 0.33077544426494343 Specificity: 0.9484590860786397 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0006.model
  Validation: Loss: 1.0145202494329877 AUROC: 0.7812522164692531 AUPRC: 0.4773414610816576 Sensitivity: 0.15821256038647344 Specificity: 0.980152256661229 Threshold: 0.0
  Test: Loss: 0.9887405153777864 AUROC: 0.7889345760390789 AUPRC: 0.4873483989989588 Sensitivity: 0.16074313408723748 Specificity: 0.9829082536308891 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0007.model
  Validation: Loss: 1.0505692677365408 AUROC: 0.7814180412059695 AUPRC: 0.47682310139858297 Sensitivity: 0.06763285024154589 Specificity: 0.9923871669385536 Threshold: 0.0
  Test: Loss: 1.0257906720594123 AUROC: 0.7898246117027395 AUPRC: 0.48689506716816316 Sensitivity: 0.07108239095315025 Specificity: 0.9936238044633369 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0008.model
  Validation: Loss: 0.9617106997304492 AUROC: 0.7811963942806556 AUPRC: 0.47075538644300624 Sensitivity: 0.3756038647342995 Specificity: 0.9350190320826536 Threshold: 0.0
  Test: Loss: 0.940979007769514 AUROC: 0.7886412357069239 AUPRC: 0.484084577297698 Sensitivity: 0.3675282714054927 Specificity: 0.9344668792065179 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0009.model
  Validation: Loss: 1.0718994918796751 AUROC: 0.7820432497182622 AUPRC: 0.47358433019837454 Sensitivity: 0.05676328502415459 Specificity: 0.9945622620989668 Threshold: 0.0
  Test: Loss: 1.0474834392468135 AUROC: 0.7899422125089488 AUPRC: 0.48597399046720613 Sensitivity: 0.05735056542810985 Specificity: 0.9956606447042153 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0010.model
  Validation: Loss: 1.014600267012914 AUROC: 0.7799331709892743 AUPRC: 0.46966925465478143 Sensitivity: 0.18357487922705315 Specificity: 0.9730831973898858 Threshold: 0.0
  Test: Loss: 0.9978926617790151 AUROC: 0.7878564792321553 AUPRC: 0.48342547216764165 Sensitivity: 0.18295638126009692 Specificity: 0.9803400637619554 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0011.model
  Validation: Loss: 0.9870766136381361 AUROC: 0.7819115750263349 AUPRC: 0.46649502559973965 Sensitivity: 0.2246376811594203 Specificity: 0.9660141381185426 Threshold: 0.0
  Test: Loss: 0.9689687976130733 AUROC: 0.7885137633001933 AUPRC: 0.48136272629342686 Sensitivity: 0.23546042003231019 Specificity: 0.9717499114417286 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0012.model
  Validation: Loss: 1.0314302328560088 AUROC: 0.7817805570660384 AUPRC: 0.4661722897752808 Sensitivity: 0.08333333333333333 Specificity: 0.9883088635127787 Threshold: 0.0
  Test: Loss: 1.0047015249729156 AUROC: 0.7884624738974854 AUPRC: 0.47866982608706204 Sensitivity: 0.09087237479806139 Specificity: 0.9901700318809776 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0013.model
  Validation: Loss: 1.036820309029685 AUROC: 0.7825922773614099 AUPRC: 0.46441211395167137 Sensitivity: 0.05676328502415459 Specificity: 0.9923871669385536 Threshold: 0.0
  Test: Loss: 1.0120510795602091 AUROC: 0.788411291794783 AUPRC: 0.47428236078805736 Sensitivity: 0.05977382875605816 Specificity: 0.9934466879206518 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0014.model
  Validation: Loss: 1.0537436753511429 AUROC: 0.7822268062090035 AUPRC: 0.46320138209283346 Sensitivity: 0.05434782608695652 Specificity: 0.9932028276237085 Threshold: 0.0
  Test: Loss: 1.0376180736003098 AUROC: 0.7878272220972774 AUPRC: 0.474954718748013 Sensitivity: 0.05896607431340872 Specificity: 0.9934466879206518 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0015.model
  Validation: Loss: 1.0477758116192288 AUROC: 0.7828352680647169 AUPRC: 0.46489587704230195 Sensitivity: 0.051932367149758456 Specificity: 0.9932028276237085 Threshold: 0.0
  Test: Loss: 1.0264391170607672 AUROC: 0.7884327875625845 AUPRC: 0.4732857884127376 Sensitivity: 0.05492730210016155 Specificity: 0.9939780375487071 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0016.model
  Validation: Loss: 1.0134728617138333 AUROC: 0.781415085913632 AUPRC: 0.46369487883328614 Sensitivity: 0.2584541062801932 Specificity: 0.9592169657422512 Threshold: 0.0
  Test: Loss: 0.966169481476148 AUROC: 0.7857207441527221 AUPRC: 0.46874521785489676 Sensitivity: 0.2455573505654281 Specificity: 0.9651965993623804 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0017.model
  Validation: Loss: 1.0363056510686874 AUROC: 0.7828227901637363 AUPRC: 0.4628001273104755 Sensitivity: 0.10990338164251208 Specificity: 0.9847743338771071 Threshold: 0.0
  Test: Loss: 1.0032565411594179 AUROC: 0.7865553593634562 AUPRC: 0.46531336092567976 Sensitivity: 0.10339256865912763 Specificity: 0.9872476089266737 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0018.model
  Validation: Loss: 1.063853331738048 AUROC: 0.7814019512810207 AUPRC: 0.46216919657061195 Sensitivity: 0.06521739130434782 Specificity: 0.9910277324632952 Threshold: 0.0
  Test: Loss: 1.0374972276665546 AUROC: 0.7853396145325983 AUPRC: 0.462800451287584 Sensitivity: 0.07189014539579967 Specificity: 0.9924725469358838 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0019.model
  Validation: Loss: 1.0259893784920375 AUROC: 0.7801032644815893 AUPRC: 0.4589502435545978 Sensitivity: 0.13285024154589373 Specificity: 0.9798803697661773 Threshold: 0.0
  Test: Loss: 0.9962506415667357 AUROC: 0.7836407871929002 AUPRC: 0.45970447753683136 Sensitivity: 0.1284329563812601 Specificity: 0.9840595111583422 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0020.model
  Validation: Loss: 1.005423323975669 AUROC: 0.7800280687098902 AUPRC: 0.4590669169733512 Sensitivity: 0.1678743961352657 Specificity: 0.9736269711799891 Threshold: 0.0
  Test: Loss: 0.985735245876842 AUROC: 0.7836958499791408 AUPRC: 0.4557527085264194 Sensitivity: 0.1567043618739903 Specificity: 0.9782146652497343 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0021.model
  Validation: Loss: 0.9935344556967417 AUROC: 0.7799610820835731 AUPRC: 0.4600034503494067 Sensitivity: 0.27294685990338163 Specificity: 0.9581294181620446 Threshold: 0.0
  Test: Loss: 0.9764557988555344 AUROC: 0.7836388736761325 AUPRC: 0.46021983025484603 Sensitivity: 0.2544426494345719 Specificity: 0.9619199433227064 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0022.model
  Validation: Loss: 1.0076036204894383 AUROC: 0.777971841974608 AUPRC: 0.4556059856144027 Sensitivity: 0.213768115942029 Specificity: 0.9676454594888526 Threshold: 0.0
  Test: Loss: 0.9784764476396419 AUROC: 0.7808168835271315 AUPRC: 0.45180597307747766 Sensitivity: 0.20234248788368336 Specificity: 0.9706872121856182 Threshold: 0.0
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0023.model
  Validation: Loss: 1.0095796038707097 AUROC: 0.7790505236778023 AUPRC: 0.4518615774967049 Sensitivity: 0.12439613526570048 Specificity: 0.9804241435562806 Threshold: 0.0
  Test: Loss: 1.0010059164078147 AUROC: 0.7803662056200023 AUPRC: 0.445835590885484 Sensitivity: 0.12075928917609047 Specificity: 0.9840595111583422 Threshold: 0.0
Plot AUROC/AUPRC for Each Intermediate Model
AUROC/AUPRC Plots - Best Model Based on Validation Loss
Epoch with best Validation Loss: 8, 0.9559
Best Model Based on Validation Loss: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0008.model
Generate Stats Based on Test Data
Loss: 0.940979007769514 AUROC: 0.7886412357069239 AUPRC: 0.484084577297698 Sensitivity: 0.3675282714054927 Specificity: 0.9344668792065179 Threshold: 0.0
best_model_val_test_auroc: 0.7886412357069239
best_model_val_test_auprc: 0.484084577297698
AUROC/AUPRC Plots - Best Model Based on Model AUROC
Epoch with best model Test AUROC: 1, 0.7914
Best Model Based on Model AUROC: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0001.model
Generate Stats Based on Test Data
Loss: 1.1420025693045721 AUROC: 0.7914006699526219 AUPRC: 0.49225847370885256 Sensitivity: 0.0020193861066235864 Specificity: 0.9999114417286574 Threshold: 0.0
best_model_auroc_test_auroc: 0.7914006699526219
best_model_auroc_test_auprc: 0.49225847370885256
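The AUROC values reported above can be sanity-checked against a small rank-based reference implementation: AUROC equals the probability that a randomly chosen positive sample is scored higher than a randomly chosen negative one (the normalized Mann-Whitney U statistic). A minimal self-contained sketch, not the notebook's own evaluation code:

```python
def auroc(labels, scores):
    """Fraction of (positive, negative) pairs where the positive is scored
    higher; ties count half. O(n^2) reference for sanity checks only."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One of the four positive/negative pairs is mis-ranked:
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

Because AUROC is threshold-free, it can sit near 0.79 even while the sensitivity at the fixed threshold of 0.0 above is close to zero; the threshold choice, not the ranking quality, drives the sensitivity/specificity trade-off.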
run_experiment(
experimentNamePrefix=None,
useAbp=True,
useEeg=False,
useEcg=False,
nResiduals=12,
skip_connection=False,
batch_size=128,
learning_rate=1e-4,
weight_decay=0.0,
balance_labels=True,
pos_weight=2.0,
#pos_weight=None,
max_epochs=100,
patience=15,
device=device
)
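The `max_epochs=100` and `patience=15` arguments above drive the patience-based early stopping visible in the training logs: training halts once the validation loss has gone `patience` consecutive epochs without improving. A minimal sketch of that bookkeeping with a hypothetical helper name, not the project's actual implementation:

```python
def early_stopping_epochs(val_losses, patience=15):
    """Return how many epochs run before patience-based early stopping
    fires (hypothetical helper; mirrors the behavior in the logs)."""
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss                      # improvement: checkpoint, reset counter
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch                 # stop: patience exhausted
    return len(val_losses)                   # ran to max_epochs without stopping

# Toy loss curve with patience=2: stops after the 4th epoch, never
# reaching the late improvement at epoch 5.
print(early_stopping_epochs([1.0, 0.9, 0.95, 0.93, 0.8], patience=2))  # 4
```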
Experiment Setup
name: ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES
prediction_window: 003
max_cases: _ALL
use_abp: True
use_eeg: False
use_ecg: False
n_residuals: 12
skip_connection: False
batch_size: 128
learning_rate: 0.0001
weight_decay: 0.0
balance_labels: True
pos_weight: 2.0
max_epochs: 100
patience: 15
device: mps
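With hypotension labels heavily skewed toward negatives, the `pos_weight: 2.0` setting above up-weights the positive class in the loss. In PyTorch's `BCEWithLogitsLoss`, `pos_weight` multiplies only the positive term of the binary cross-entropy. A stdlib-only sketch of that weighting (an illustration of the formula, not the training code itself):

```python
import math

def weighted_bce_with_logits(logit, target, pos_weight=2.0):
    """BCE on a raw logit with the positive term scaled by pos_weight,
    mirroring torch.nn.BCEWithLogitsLoss(pos_weight=...)."""
    p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid
    return -(pos_weight * target * math.log(p)
             + (1 - target) * math.log(1 - p))

# A missed positive (target 1, negative logit) costs twice as much
# under pos_weight=2, while negatives are unaffected.
ratio = (weighted_bce_with_logits(-1.0, 1, pos_weight=2.0)
         / weighted_bce_with_logits(-1.0, 1, pos_weight=1.0))
print(ratio)  # 2.0
```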
Model Architecture
HypotensionCNN(
(abpResiduals): Sequential(
(0): ResidualBlock(
(bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(1): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(2): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(3): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(4): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(5): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(6): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(7): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
(8): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
)
(9): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
(10): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(11): ResidualBlock(
(bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
)
(abpFc): Linear(in_features=2814, out_features=32, bias=True)
(fullLinear1): Linear(in_features=32, out_features=16, bias=True)
(fullLinear2): Linear(in_features=16, out_features=1, bias=True)
(sigmoid): Sigmoid()
)
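The `abpFc` input size of 2814 in the printout above follows from the six `MaxPool1d(kernel_size=2, stride=2)` downsampling stages (blocks 0, 2, 4, 6, 8, 10; block 8 uses padding=1) and the 6 output channels of the final `ResidualBlock`. A sketch of that arithmetic, assuming a 30,000-sample ABP input window (the window length itself is an assumption, set elsewhere in the notebook):

```python
def pooled_length(length, paddings=(0, 0, 0, 0, 1, 0)):
    """Apply the six MaxPool1d(kernel=2, stride=2) stages from the printed
    architecture; the fifth stage (block 8) uses padding=1."""
    for pad in paddings:
        length = (length + 2 * pad - 2) // 2 + 1  # standard MaxPool1d output size
    return length

n_channels = 6                # channels out of the last ResidualBlock
L = pooled_length(30_000)     # assumed 30,000-sample input window
print(n_channels * L)         # 6 * 469 = 2814, matching abpFc's in_features
```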
Training Loop
[2024-05-01 00:12:26.955329] Completed epoch 0 with training loss 0.63455516, validation loss 0.61734772. Validation loss improved to 0.61734772. Model saved.
[2024-05-01 00:15:21.032810] Completed epoch 1 with training loss 0.57758701, validation loss 0.61154246. Validation loss improved to 0.61154246. Model saved.
[2024-05-01 00:18:14.917041] Completed epoch 2 with training loss 0.57424092, validation loss 0.59376109. Validation loss improved to 0.59376109. Model saved.
[2024-05-01 00:21:08.299915] Completed epoch 3 with training loss 0.57415223, validation loss 0.59877604. No improvement in validation loss. 1 epochs without improvement.
[2024-05-01 00:24:00.236640] Completed epoch 4 with training loss 0.57049727, validation loss 0.60131598. No improvement in validation loss. 2 epochs without improvement.
[2024-05-01 00:26:52.148627] Completed epoch 5 with training loss 0.56833243, validation loss 0.69151062. No improvement in validation loss. 3 epochs without improvement.
[2024-05-01 00:29:44.773215] Completed epoch 6 with training loss 0.57073259, validation loss 0.59741580. No improvement in validation loss. 4 epochs without improvement.
[2024-05-01 00:32:38.102128] Completed epoch 7 with training loss 0.56626230, validation loss 0.66222119. No improvement in validation loss. 5 epochs without improvement.
[2024-05-01 00:35:31.311796] Completed epoch 8 with training loss 0.56662333, validation loss 0.61070532. No improvement in validation loss. 6 epochs without improvement.
[2024-05-01 00:38:24.666730] Completed epoch 9 with training loss 0.56467009, validation loss 0.64684361. No improvement in validation loss. 7 epochs without improvement.
[2024-05-01 00:41:17.505750] Completed epoch 10 with training loss 0.56457216, validation loss 0.60841727. No improvement in validation loss. 8 epochs without improvement.
[2024-05-01 00:44:09.285117] Completed epoch 11 with training loss 0.56232202, validation loss 0.61129886. No improvement in validation loss. 9 epochs without improvement.
[2024-05-01 00:47:02.071066] Completed epoch 12 with training loss 0.56344950, validation loss 0.62580132. No improvement in validation loss. 10 epochs without improvement.
[2024-05-01 00:49:57.489811] Completed epoch 13 with training loss 0.56135768, validation loss 0.61743510. No improvement in validation loss. 11 epochs without improvement.
[2024-05-01 00:52:50.642029] Completed epoch 14 with training loss 0.56238115, validation loss 0.62852275. No improvement in validation loss. 12 epochs without improvement.
[2024-05-01 00:55:44.001043] Completed epoch 15 with training loss 0.56168562, validation loss 0.62825137. No improvement in validation loss. 13 epochs without improvement.
[2024-05-01 00:58:36.960640] Completed epoch 16 with training loss 0.56102157, validation loss 0.68298972. No improvement in validation loss. 14 epochs without improvement.
[2024-05-01 01:01:29.235803] Completed epoch 17 with training loss 0.55979985, validation loss 0.68449992. No improvement in validation loss. 15 epochs without improvement. Early stopping due to no improvement in validation loss.
Plot Validation and Loss Values from Training
Generate AUROC/AUPRC for Each Intermediate Model

Each intermediate checkpoint (./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_NNNN.model, NNNN as below) was evaluated on the validation and test sets. Metrics are rounded to four decimal places; the reported decision threshold was 0.0 in every case.

Ckpt | Val Loss | Val AUROC | Val AUPRC | Val Sens | Val Spec | Test Loss | Test AUROC | Test AUPRC | Test Sens | Test Spec
0000 |  0.6141  |  0.7751   |  0.4719   |  0.5628  |  0.8322  |  0.5933   |  0.7859    |  0.4832    |  0.5674   |  0.8499
0001 |  0.6053  |  0.7791   |  0.4747   |  0.4372  |  0.9018  |  0.5849   |  0.7894    |  0.4851    |  0.4297   |  0.9098
0002 |  0.5940  |  0.7808   |  0.4738   |  0.4638  |  0.8921  |  0.5821   |  0.7906    |  0.4850    |  0.4507   |  0.9024
0003 |  0.5977  |  0.7820   |  0.4707   |  0.4686  |  0.8912  |  0.5813   |  0.7916    |  0.4847    |  0.4620   |  0.8985
0004 |  0.6044  |  0.7819   |  0.4714   |  0.3804  |  0.9288  |  0.5881   |  0.7907    |  0.4831    |  0.3708   |  0.9316
0005 |  0.6791  |  0.7843   |  0.4724   |  0.1220  |  0.9859  |  0.6687   |  0.7915    |  0.4808    |  0.1191   |  0.9874
0006 |  0.6034  |  0.7826   |  0.4681   |  0.4758  |  0.8864  |  0.5870   |  0.7896    |  0.4810    |  0.4746   |  0.8938
0007 |  0.6628  |  0.7831   |  0.4693   |  0.1268  |  0.9845  |  0.6405   |  0.7889    |  0.4780    |  0.1264   |  0.9866
0008 |  0.6123  |  0.7841   |  0.4676   |  0.2488  |  0.9603  |  0.6027   |  0.7901    |  0.4797    |  0.2557   |  0.9637
0009 |  0.6359  |  0.7836   |  0.4685   |  0.1522  |  0.9812  |  0.6282   |  0.7881    |  0.4774    |  0.1506   |  0.9823
0010 |  0.6104  |  0.7834   |  0.4683   |  0.3152  |  0.9437  |  0.5955   |  0.7887    |  0.4784    |  0.3162   |  0.9478
0011 |  0.6092  |  0.7839   |  0.4664   |  0.2717  |  0.9521  |  0.5996   |  0.7884    |  0.4770    |  0.2823   |  0.9568
0012 |  0.6256  |  0.7816   |  0.4665   |  0.1872  |  0.9739  |  0.6147   |  0.7859    |  0.4740    |  0.1874   |  0.9754
0013 |  0.6192  |  0.7839   |  0.4658   |  0.2283  |  0.9666  |  0.6087   |  0.7879    |  0.4768    |  0.2189   |  0.9687
0014 |  0.6080  |  0.7841   |  0.4664   |  0.2548  |  0.9587  |  0.6027   |  0.7877    |  0.4762    |  0.2553   |  0.9630
0015 |  0.6369  |  0.7822   |  0.4639   |  0.1486  |  0.9802  |  0.6229   |  0.7853    |  0.4729    |  0.1575   |  0.9813
0016 |  0.6861  |  0.7766   |  0.4632   |  0.0435  |  0.9940  |  0.6737   |  0.7799    |  0.4682    |  0.0509   |  0.9965
0017 |  0.6934  |  0.7780   |  0.4590   |  0.0242  |  0.9962  |  0.6804   |  0.7801    |  0.4646    |  0.0343   |  0.9973

Plot AUROC/AUPRC for Each Intermediate Model
AUROC/AUPRC Plots - Best Model Based on Validation Loss
Epoch with best validation loss: 2 (0.5938)
Best model based on validation loss: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0002.model
Generate Stats Based on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.5821293793894626 AUROC: 0.790591395426559 AUPRC: 0.4849788406700981 Sensitivity: 0.4507269789983845 Specificity: 0.9024087849805171 Threshold: 0.0
best_model_val_test_auroc: 0.790591395426559
best_model_val_test_auprc: 0.4849788406700981
AUROC/AUPRC Plots - Best Model Based on Model AUROC
Epoch with best test AUROC: 3 (0.7916)
Best model based on test AUROC: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0003.model
Generate Stats Based on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.581321574471615 AUROC: 0.7916278598312844 AUPRC: 0.48469646661173993 Sensitivity: 0.4620355411954766 Specificity: 0.8985122210414452 Threshold: 0.0
best_model_auroc_test_auroc: 0.7916278598312844
best_model_auroc_test_auprc: 0.48469646661173993
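The sensitivity and specificity figures reported above are standard confusion-matrix ratios computed at a fixed decision threshold. A minimal pure-Python sketch of that computation (illustrative only; `sensitivity_specificity`, `scores`, and `labels` are hypothetical names, not the project's evaluation code):

```python
def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    of thresholded scores against binary labels."""
    tp = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s <= threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s <= threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 0)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Toy example: three positives, three negatives, threshold 0.5
scores = [0.9, 0.8, 0.3, 0.2, 0.7, 0.1]
labels = [1, 1, 1, 0, 0, 0]
sens, spec = sensitivity_specificity(scores, labels, 0.5)
```

Raising the threshold trades sensitivity for specificity, which is why the low-sensitivity checkpoints above pair with very high specificity.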
run_experiment(
    experimentNamePrefix=None,
    useAbp=True,
    useEeg=False,
    useEcg=False,
    nResiduals=12,
    skip_connection=False,
    batch_size=128,
    learning_rate=1e-4,
    weight_decay=1e-2,
    balance_labels=False,
    # pos_weight=2.0,
    pos_weight=None,
    max_epochs=100,
    patience=15,
    device=device,
)
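This run keeps `balance_labels=False` and `pos_weight=None`, so positive (hypotensive) samples are not up-weighted in the loss. For reference, a `pos_weight` factor scales only the positive term of the binary cross-entropy, as in PyTorch's `BCEWithLogitsLoss`. A minimal pure-Python sketch of that weighting (illustrative, not the project's training code):

```python
import math

def weighted_bce(p, y, pos_weight=1.0):
    """Binary cross-entropy with an optional positive-class weight.
    p: predicted probability in (0, 1); y: true label (0 or 1)."""
    eps = 1e-12  # guard against log(0)
    return -(pos_weight * y * math.log(p + eps)
             + (1 - y) * math.log(1 - p + eps))

# With pos_weight=2.0, a missed positive costs twice as much:
unweighted = weighted_bce(0.1, 1)                 # -ln(0.1) ≈ 2.3026
weighted = weighted_bce(0.1, 1, pos_weight=2.0)   # ≈ 4.6052
```

With `pos_weight=None` the two terms are weighted equally, which, combined with the class imbalance of hypotension labels, helps explain the low-sensitivity/high-specificity operating points seen earlier.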
Experiment Setup
name: ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES
prediction_window: 003
max_cases: _ALL
use_abp: True
use_eeg: False
use_ecg: False
n_residuals: 12
skip_connection: False
batch_size: 128
learning_rate: 0.0001
weight_decay: 0.01
balance_labels: False
max_epochs: 100
patience: 15
device: mps
Model Architecture
HypotensionCNN(
(abpResiduals): Sequential(
(0): ResidualBlock(
(bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(1): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(2): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(3): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(4): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(5): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(6): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(7): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
(8): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
)
(9): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
(10): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(11): ResidualBlock(
(bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
)
(abpFc): Linear(in_features=2814, out_features=32, bias=True)
(fullLinear1): Linear(in_features=32, out_features=16, bias=True)
(fullLinear2): Linear(in_features=16, out_features=1, bias=True)
(sigmoid): Sigmoid()
)
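The `abpFc` input size of 2814 is the final channel count (6) times the sequence length left after the six `MaxPool1d` stages (in blocks 0, 2, 4, 6, 8, and 10; block 8's pool uses padding=1). This is consistent with a 30,000-sample ABP input window, an inference from the printed layer sizes rather than something stated in this log. A quick check using the standard pooling output-length formula:

```python
def pool_out_len(length, kernel=2, stride=2, padding=0):
    # Output length of a 1-D max pool in floor mode:
    # floor((L + 2p - k) / s) + 1
    return (length + 2 * padding - kernel) // stride + 1

length = 30_000  # assumed ABP window length (inferred, not from the log)
# Paddings of the six pooling stages, read off the architecture above:
for padding in [0, 0, 0, 0, 1, 0]:
    length = pool_out_len(length, padding=padding)

print(length)      # final pooled sequence length
print(6 * length)  # channels x length, the flattened abpFc input
```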
Training Loop
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 01:33:03.070630] Completed epoch 0 with training loss 0.44521251, validation loss 0.49839368 Validation loss improved to 0.49839368. Model saved.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 01:35:56.397513] Completed epoch 1 with training loss 0.38272783, validation loss 0.48891824 Validation loss improved to 0.48891824. Model saved.
100%|██████████| 212/212 [02:30<00:00, 1.41it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 01:38:48.399331] Completed epoch 2 with training loss 0.38057312, validation loss 0.47110870 Validation loss improved to 0.47110870. Model saved.
100%|██████████| 212/212 [02:30<00:00, 1.41it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 01:41:40.174618] Completed epoch 3 with training loss 0.37937200, validation loss 0.46025854 Validation loss improved to 0.46025854. Model saved.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 01:44:33.238022] Completed epoch 4 with training loss 0.37933752, validation loss 0.46212849 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:33<00:00, 1.38it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 01:47:28.473797] Completed epoch 5 with training loss 0.37789315, validation loss 0.48179421 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 01:50:21.888257] Completed epoch 6 with training loss 0.37795204, validation loss 0.48513454 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 01:53:15.353887] Completed epoch 7 with training loss 0.37642172, validation loss 0.48209566 No improvement in validation loss. 4 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 01:56:08.386601] Completed epoch 8 with training loss 0.37740329, validation loss 0.44699204 Validation loss improved to 0.44699204. Model saved.
100%|██████████| 212/212 [02:30<00:00, 1.41it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 01:59:00.507781] Completed epoch 9 with training loss 0.37685055, validation loss 0.48513842 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:01:54.121108] Completed epoch 10 with training loss 0.37575668, validation loss 0.45883751 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:04:47.702648] Completed epoch 11 with training loss 0.37549788, validation loss 0.45957458 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:07:41.094294] Completed epoch 12 with training loss 0.37554666, validation loss 0.47708163 No improvement in validation loss. 4 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:10:34.497920] Completed epoch 13 with training loss 0.37502605, validation loss 0.47178471 No improvement in validation loss. 5 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:13:28.171644] Completed epoch 14 with training loss 0.37462881, validation loss 0.50017136 No improvement in validation loss. 6 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 02:16:20.554919] Completed epoch 15 with training loss 0.37417990, validation loss 0.50603783 No improvement in validation loss. 7 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:19:14.746020] Completed epoch 16 with training loss 0.37469330, validation loss 0.48925897 No improvement in validation loss. 8 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:22:08.272591] Completed epoch 17 with training loss 0.37410644, validation loss 0.43439901 Validation loss improved to 0.43439901. Model saved.
100%|██████████| 212/212 [02:33<00:00, 1.38it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 02:25:03.594807] Completed epoch 18 with training loss 0.37436587, validation loss 0.45607293 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:27:57.010851] Completed epoch 19 with training loss 0.37333196, validation loss 0.45753363 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:30:50.409217] Completed epoch 20 with training loss 0.37529004, validation loss 0.48466089 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 02:33:42.936017] Completed epoch 21 with training loss 0.37379211, validation loss 0.45899093 No improvement in validation loss. 4 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 02:36:35.263922] Completed epoch 22 with training loss 0.37364444, validation loss 0.43718660 No improvement in validation loss. 5 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:39:29.218308] Completed epoch 23 with training loss 0.37334132, validation loss 0.47539452 No improvement in validation loss. 6 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:42:22.980690] Completed epoch 24 with training loss 0.37337837, validation loss 0.47435740 No improvement in validation loss. 7 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 02:45:16.535889] Completed epoch 25 with training loss 0.37226969, validation loss 0.47283024 No improvement in validation loss. 8 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:48:10.149599] Completed epoch 26 with training loss 0.37340701, validation loss 0.43742660 No improvement in validation loss. 9 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 02:51:03.164931] Completed epoch 27 with training loss 0.37312987, validation loss 0.46870947 No improvement in validation loss. 10 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:53:55.620100] Completed epoch 28 with training loss 0.37382945, validation loss 0.42305431 Validation loss improved to 0.42305431. Model saved.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:56:48.895969] Completed epoch 29 with training loss 0.37314206, validation loss 0.43553197 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 02:59:42.511444] Completed epoch 30 with training loss 0.37383750, validation loss 0.45883596 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:02:36.215628] Completed epoch 31 with training loss 0.37200454, validation loss 0.47171226 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 03:05:29.884553] Completed epoch 32 with training loss 0.37397513, validation loss 0.44976023 No improvement in validation loss. 4 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:08:23.638019] Completed epoch 33 with training loss 0.37261176, validation loss 0.42933112 No improvement in validation loss. 5 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:11:16.119037] Completed epoch 34 with training loss 0.37237361, validation loss 0.45257938 No improvement in validation loss. 6 epochs without improvement.
100%|██████████| 212/212 [02:30<00:00, 1.41it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:14:08.237047] Completed epoch 35 with training loss 0.37289235, validation loss 0.43593308 No improvement in validation loss. 7 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 03:17:01.517364] Completed epoch 36 with training loss 0.37225321, validation loss 0.42398837 No improvement in validation loss. 8 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 03:19:55.762306] Completed epoch 37 with training loss 0.37287480, validation loss 0.45812988 No improvement in validation loss. 9 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:22:49.192255] Completed epoch 38 with training loss 0.37284377, validation loss 0.43487030 No improvement in validation loss. 10 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:25:42.862246] Completed epoch 39 with training loss 0.37370667, validation loss 0.42804447 No improvement in validation loss. 11 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 03:28:36.223585] Completed epoch 40 with training loss 0.37314412, validation loss 0.42855012 No improvement in validation loss. 12 epochs without improvement.
100%|██████████| 212/212 [02:30<00:00, 1.41it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 03:31:28.295955] Completed epoch 41 with training loss 0.37267527, validation loss 0.43495092 No improvement in validation loss. 13 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 03:34:21.606227] Completed epoch 42 with training loss 0.37276167, validation loss 0.46286428 No improvement in validation loss. 14 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:37:14.994772] Completed epoch 43 with training loss 0.37239268, validation loss 0.41731331 Validation loss improved to 0.41731331. Model saved.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:40:08.749823] Completed epoch 44 with training loss 0.37233889, validation loss 0.44871724 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:34<00:00, 1.38it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:43:04.145305] Completed epoch 45 with training loss 0.37169841, validation loss 0.48714781 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 03:45:57.767726] Completed epoch 46 with training loss 0.37220934, validation loss 0.41016650 Validation loss improved to 0.41016650. Model saved.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:48:50.720365] Completed epoch 47 with training loss 0.37162566, validation loss 0.44734594 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:30<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 03:51:42.909372] Completed epoch 48 with training loss 0.37248808, validation loss 0.41419423 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:54:36.381188] Completed epoch 49 with training loss 0.37293914, validation loss 0.41515324 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 03:57:30.020485] Completed epoch 50 with training loss 0.37162587, validation loss 0.40665877 Validation loss improved to 0.40665877. Model saved.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 04:00:24.181881] Completed epoch 51 with training loss 0.37332168, validation loss 0.41077757 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:03:17.964546] Completed epoch 52 with training loss 0.37160888, validation loss 0.43870842 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:23<00:00, 1.56it/s]
[2024-05-01 04:06:13.325482] Completed epoch 53 with training loss 0.37171397, validation loss 0.42218459 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 04:09:06.111046] Completed epoch 54 with training loss 0.37198749, validation loss 0.44544873 No improvement in validation loss. 4 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:11:58.793784] Completed epoch 55 with training loss 0.37108874, validation loss 0.45120302 No improvement in validation loss. 5 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 04:14:52.663421] Completed epoch 56 with training loss 0.37239772, validation loss 0.44513977 No improvement in validation loss. 6 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:17:46.467011] Completed epoch 57 with training loss 0.37169078, validation loss 0.39432612 Validation loss improved to 0.39432612. Model saved.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 04:20:40.338582] Completed epoch 58 with training loss 0.37189272, validation loss 0.40405577 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:23:34.052414] Completed epoch 59 with training loss 0.37220672, validation loss 0.44958824 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 04:26:27.802429] Completed epoch 60 with training loss 0.37180895, validation loss 0.39468747 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:29:20.628128] Completed epoch 61 with training loss 0.37243289, validation loss 0.39743245 No improvement in validation loss. 4 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:32:13.237396] Completed epoch 62 with training loss 0.37195867, validation loss 0.40806380 No improvement in validation loss. 5 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:35:07.011353] Completed epoch 63 with training loss 0.37236202, validation loss 0.39828146 No improvement in validation loss. 6 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 04:38:00.536550] Completed epoch 64 with training loss 0.37193924, validation loss 0.43765831 No improvement in validation loss. 7 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:40:53.911415] Completed epoch 65 with training loss 0.37149546, validation loss 0.44294035 No improvement in validation loss. 8 epochs without improvement.
100%|██████████| 212/212 [02:34<00:00, 1.38it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:43:49.364958] Completed epoch 66 with training loss 0.37134939, validation loss 0.42501551 No improvement in validation loss. 9 epochs without improvement.
100%|██████████| 212/212 [02:34<00:00, 1.38it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:46:44.825377] Completed epoch 67 with training loss 0.37152666, validation loss 0.41863838 No improvement in validation loss. 10 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:49:38.154204] Completed epoch 68 with training loss 0.37064955, validation loss 0.42556402 No improvement in validation loss. 11 epochs without improvement.
100%|██████████| 212/212 [02:30<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 04:52:30.311903] Completed epoch 69 with training loss 0.37131882, validation loss 0.40920451 No improvement in validation loss. 12 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 04:55:23.293203] Completed epoch 70 with training loss 0.37145630, validation loss 0.42220843 No improvement in validation loss. 13 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 04:58:17.016772] Completed epoch 71 with training loss 0.37127095, validation loss 0.42590562 No improvement in validation loss. 14 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 05:01:10.626953] Completed epoch 72 with training loss 0.37124947, validation loss 0.50681019 No improvement in validation loss. 15 epochs without improvement. Early stopping due to no improvement in validation loss.

Plot Validation and Loss Values from Training
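The early-stopping behaviour recorded in the log above (checkpoint on each validation-loss improvement, stop after 15 epochs without improvement) can be sketched as follows. This is a minimal illustration, not the project's actual training code; `train_one_epoch`, `validate`, and `save_model` are hypothetical placeholders.

```python
def fit(train_one_epoch, validate, save_model, max_epochs=100, patience=15):
    """Train until validation loss fails to improve for `patience` epochs.

    Returns the best validation loss observed. All three callables are
    placeholders for the real training, evaluation, and checkpoint steps.
    """
    best_val = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_loss = train_one_epoch()
        val_loss = validate()
        if val_loss < best_val:
            # Improvement: reset the patience counter and checkpoint.
            best_val = val_loss
            epochs_without_improvement = 0
            save_model()
            print(f"Completed epoch {epoch}: validation loss improved "
                  f"to {val_loss:.8f}. Model saved.")
        else:
            epochs_without_improvement += 1
            print(f"Completed epoch {epoch}: no improvement. "
                  f"{epochs_without_improvement} epochs without improvement.")
            if epochs_without_improvement >= patience:
                print("Early stopping due to no improvement in validation loss.")
                break
    return best_val
```

Checkpointing only on improvement means the saved model is always the one with the lowest validation loss seen so far, even though training continues past it.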
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0000.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:19<00:00, 1.84it/s]
Loss: 0.49449100097020465 AUROC: 0.7608830282158177 AUPRC: 0.4729839271388122 Sensitivity: 0.6630434782608695 Specificity: 0.7278412180532898 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4870406324388804 AUROC: 0.7715152248693373 AUPRC: 0.46057129424846754 Sensitivity: 0.6906300484652665 Specificity: 0.7402585901523202 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0001.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4866752442386415 AUROC: 0.7682354671857473 AUPRC: 0.47995706502164776 Sensitivity: 0.7016908212560387 Specificity: 0.6900489396411092 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4728490198376002 AUROC: 0.7779244509244111 AUPRC: 0.4731451293344936 Sensitivity: 0.7277867528271406 Specificity: 0.7026213248317393 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0002.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.46253079838222927 AUROC: 0.7705307442345529 AUPRC: 0.4810977231071995 Sensitivity: 0.6835748792270532 Specificity: 0.7128874388254486 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4530472123512515 AUROC: 0.7799110926459724 AUPRC: 0.4778708228561095 Sensitivity: 0.7140549273021002 Specificity: 0.7295430393198725 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0003.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4661707820163833 AUROC: 0.771451646163505 AUPRC: 0.4816582767239196 Sensitivity: 0.6871980676328503 Specificity: 0.7115280043501904 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.45247565002905 AUROC: 0.7808613057294769 AUPRC: 0.47919953619983735 Sensitivity: 0.7152665589660743 Specificity: 0.7293659227771874 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0004.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4620026962624656 AUROC: 0.7708328407846103 AUPRC: 0.480513628518526 Sensitivity: 0.7222222222222222 Specificity: 0.6753670473083198 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.45280018959332397 AUROC: 0.7806754263529959 AUPRC: 0.4801930230434217 Sensitivity: 0.7023424878836834 Specificity: 0.7461919943322707 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0005.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.48076867974466747 AUROC: 0.7703537550601172 AUPRC: 0.48099995232799075 Sensitivity: 0.7185990338164251 Specificity: 0.6761827079934747 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.46539704377452534 AUROC: 0.7805551966966477 AUPRC: 0.4799489776299476 Sensitivity: 0.7419224555735057 Specificity: 0.6888062345023025 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0006.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.49126603785488343 AUROC: 0.7701793928122036 AUPRC: 0.4816808998574923 Sensitivity: 0.7016908212560387 Specificity: 0.6976617727025557 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.47022316784218504 AUROC: 0.7805969900488544 AUPRC: 0.4800619304340757 Sensitivity: 0.7253634894991923 Specificity: 0.7163478568898335 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0007.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.48189592113097507 AUROC: 0.7704184431257273 AUPRC: 0.478699437679638 Sensitivity: 0.6811594202898551 Specificity: 0.7215878194671017 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.47002024824420613 AUROC: 0.7809781912023152 AUPRC: 0.480540712830595 Sensitivity: 0.7055735056542811 Specificity: 0.7417640807651434 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0008.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.44270943519141936 AUROC: 0.7716151723395146 AUPRC: 0.47862275412392113 Sensitivity: 0.7077294685990339 Specificity: 0.6971179989124524 Threshold: 0.11 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.43436164698666996 AUROC: 0.782303006488932 AUPRC: 0.483397140034173 Sensitivity: 0.7249596122778675 Specificity: 0.7184732554020545 Threshold: 0.11 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0009.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4729126501414511 AUROC: 0.7706108654934812 AUPRC: 0.47774117502490476 Sensitivity: 0.6835748792270532 Specificity: 0.7245785753126699 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4616736140516069 AUROC: 0.7814775117786794 AUPRC: 0.48153757870642966 Sensitivity: 0.7043618739903069 Specificity: 0.7450407368048175 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0010.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.46048641370402443 AUROC: 0.7699827016888511 AUPRC: 0.4761160614138136 Sensitivity: 0.7125603864734299 Specificity: 0.6900489396411092 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.44548400795018234 AUROC: 0.7808311901945536 AUPRC: 0.48130474955725827 Sensitivity: 0.7289983844911146 Specificity: 0.7080233793836345 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0011.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4573204782274034 AUROC: 0.7706617621948496 AUPRC: 0.47621594096912584 Sensitivity: 0.7053140096618358 Specificity: 0.7011963023382273 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4440865575991295 AUROC: 0.7816510874211773 AUPRC: 0.4833196237164853 Sensitivity: 0.7197092084006462 Specificity: 0.722281261069784 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0012.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.48277249021662605 AUROC: 0.7703918454946896 AUPRC: 0.47671201823182524 Sensitivity: 0.7125603864734299 Specificity: 0.6933115823817292 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.467698204572554 AUROC: 0.7812845684851585 AUPRC: 0.48206795428786703 Sensitivity: 0.7277867528271406 Specificity: 0.7113885936946511 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0013.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.47582026653819615 AUROC: 0.7705826260333672 AUPRC: 0.4754445383837783 Sensitivity: 0.6859903381642513 Specificity: 0.722131593257205 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.45903596682129083 AUROC: 0.7815463447323137 AUPRC: 0.48278403434748307 Sensitivity: 0.7039579967689822 Specificity: 0.7431810131066242 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0014.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4975035140911738 AUROC: 0.7695650203718152 AUPRC: 0.4747656176909404 Sensitivity: 0.7004830917874396 Specificity: 0.7025557368134856 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.485248570088987 AUROC: 0.7807146623884008 AUPRC: 0.4804433998815006 Sensitivity: 0.7189014539579968 Specificity: 0.7198016294721927 Threshold: 0.07 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0015.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.5135779562923644 AUROC: 0.7697738610303331 AUPRC: 0.4750697634976519 Sensitivity: 0.716183574879227 Specificity: 0.6832517672648178 Threshold: 0.06 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.49337456223589404 AUROC: 0.7808025232097064 AUPRC: 0.4810687967809246 Sensitivity: 0.7354604200323102 Specificity: 0.6972192702798441 Threshold: 0.06 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0016.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.48323194848166573 AUROC: 0.7702128861253621 AUPRC: 0.4749086367468029 Sensitivity: 0.6944444444444444 Specificity: 0.7145187601957586 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4721506231085018 AUROC: 0.7811507474947594 AUPRC: 0.4815823946771889 Sensitivity: 0.7067851373182552 Specificity: 0.7360963513992207 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0017.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4372553452849388 AUROC: 0.7729606512676236 AUPRC: 0.4756515625199532 Sensitivity: 0.7137681159420289 Specificity: 0.6949429037520392 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.4219443933279426 AUROC: 0.783890492189418 AUPRC: 0.48598710688382774 Sensitivity: 0.7294022617124394 Specificity: 0.7129826425788168 Threshold: 0.13 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0018.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.464654852118757 AUROC: 0.7726135686008727 AUPRC: 0.4760115325816157 Sensitivity: 0.717391304347826 Specificity: 0.6916802610114192 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.447044736671227 AUROC: 0.7836487810433224 AUPRC: 0.4859007224392168 Sensitivity: 0.7334410339256866 Specificity: 0.7067835635848388 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0019.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4565890845325258 AUROC: 0.7730575191831309 AUPRC: 0.4762217338423616 Sensitivity: 0.6944444444444444 Specificity: 0.720500271886895 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.44153781414583876 AUROC: 0.7838883283059703 AUPRC: 0.4864342310206501 Sensitivity: 0.7084006462035541 Specificity: 0.7396386822529224 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0020.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.48039958046542275 AUROC: 0.7722955463087743 AUPRC: 0.47679492473893886 Sensitivity: 0.6956521739130435 Specificity: 0.7161500815660685 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4690354460919345 AUROC: 0.7830680555293267 AUPRC: 0.4837679481448343 Sensitivity: 0.7092084006462036 Specificity: 0.7374247254693589 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0021.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4638580944803026 AUROC: 0.7726542859619674 AUPRC: 0.4767376696817002 Sensitivity: 0.7101449275362319 Specificity: 0.7001087547580207 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.44649616477114185 AUROC: 0.7835379937874727 AUPRC: 0.48551124024483144 Sensitivity: 0.7237479806138933 Specificity: 0.7188274884874247 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0022.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4397678284181489 AUROC: 0.774528269669769 AUPRC: 0.47804610867203645 Sensitivity: 0.7185990338164251 Specificity: 0.6952147906470908 Threshold: 0.11 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4236859026606436 AUROC: 0.7851907178914033 AUPRC: 0.4868616157474282 Sensitivity: 0.7342487883683361 Specificity: 0.7142224583776124 Threshold: 0.11 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0023.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
Loss: 0.4877266552713182 AUROC: 0.7732010150444082 AUPRC: 0.477824892570194 Sensitivity: 0.7065217391304348 Specificity: 0.7052746057640021 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.46808415761700384 AUROC: 0.7833413844104251 AUPRC: 0.4832556361499963 Sensitivity: 0.7176898222940227 Specificity: 0.7267977329082537 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0024.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.47390588538514244 AUROC: 0.7734037809353435 AUPRC: 0.478551985822769 Sensitivity: 0.6980676328502415 Specificity: 0.7180532898314301 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.46427242043945527 AUROC: 0.7840371176471597 AUPRC: 0.48529678437633744 Sensitivity: 0.7120355411954766 Specificity: 0.7384874247254694 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0025.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.46010703759060967 AUROC: 0.7741841422953558 AUPRC: 0.47763015763209904 Sensitivity: 0.7246376811594203 Specificity: 0.6897770527460576 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.45411173599185767 AUROC: 0.784762716052138 AUPRC: 0.4866571660524948 Sensitivity: 0.7354604200323102 Specificity: 0.704215373715905 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0026.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.44073648585213554 AUROC: 0.7759014955092691 AUPRC: 0.47893416823257107 Sensitivity: 0.7210144927536232 Specificity: 0.6971179989124524 Threshold: 0.11 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.4246245183878475 AUROC: 0.7865084513776462 AUPRC: 0.48723111013751325 Sensitivity: 0.7326332794830371 Specificity: 0.7162592986184909 Threshold: 0.11 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0027.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.47495825009213555 AUROC: 0.7746507501188685 AUPRC: 0.4782588348449496 Sensitivity: 0.7198067632850241 Specificity: 0.6935834692767808 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4564632828588839 AUROC: 0.7847396823175885 AUPRC: 0.48530185065381537 Sensitivity: 0.7318255250403877 Specificity: 0.7107686857952533 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0028.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4204951764808761 AUROC: 0.7754903815085388 AUPRC: 0.47992558257844525 Sensitivity: 0.716183574879227 Specificity: 0.6982055464926591 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.40861832309100365 AUROC: 0.7861090986398938 AUPRC: 0.4878824844533256 Sensitivity: 0.7318255250403877 Specificity: 0.7181190223166843 Threshold: 0.13 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0029.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.43367507722642684 AUROC: 0.776039737517502 AUPRC: 0.4793863630671941 Sensitivity: 0.7270531400966184 Specificity: 0.6900489396411092 Threshold: 0.11 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.4234986658449526 AUROC: 0.7861830819938002 AUPRC: 0.487225959278142 Sensitivity: 0.7350565428109854 Specificity: 0.7088204038257173 Threshold: 0.11 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0030.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.44983553720845115 AUROC: 0.77650109148797 AUPRC: 0.48015529516654415 Sensitivity: 0.6992753623188406 Specificity: 0.7194127243066885 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4403560956319173 AUROC: 0.7862588357977999 AUPRC: 0.4870702928698629 Sensitivity: 0.7152665589660743 Specificity: 0.7376018420120439 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0031.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.48301095681058037 AUROC: 0.7750963425302031 AUPRC: 0.47917396981775656 Sensitivity: 0.6968599033816425 Specificity: 0.7199564980967917 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.45931726956257113 AUROC: 0.785250233627879 AUPRC: 0.48719464736177814 Sensitivity: 0.7136510500807755 Specificity: 0.7386645412681544 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0032.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.45675184743271935 AUROC: 0.7761063957780037 AUPRC: 0.48021146079889226 Sensitivity: 0.7282608695652174 Specificity: 0.6889613920609027 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.4404724731489464 AUROC: 0.7859130257628744 AUPRC: 0.48800307127774595 Sensitivity: 0.7386914378029079 Specificity: 0.7054551895147007 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0033.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4246058079103629 AUROC: 0.7768982499415509 AUPRC: 0.4811141068774748 Sensitivity: 0.6992753623188406 Specificity: 0.7218597063621534 Threshold: 0.12 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.41588008403778076 AUROC: 0.7870776779077014 AUPRC: 0.48952074346067637 Sensitivity: 0.7197092084006462 Specificity: 0.7385759829968119 Threshold: 0.12 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0034.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4551022955112987 AUROC: 0.7770606268372068 AUPRC: 0.481791774233794 Sensitivity: 0.7089371980676329 Specificity: 0.7147906470908102 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.43686101571829233 AUROC: 0.7869029398484753 AUPRC: 0.48854206081161033 Sensitivity: 0.7253634894991923 Specificity: 0.7300743889479278 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0035.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.43746548477146363 AUROC: 0.77748684566544 AUPRC: 0.4827615945442527 Sensitivity: 0.7210144927536232 Specificity: 0.7011963023382273 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.42509956381939074 AUROC: 0.7870573982066307 AUPRC: 0.487210843076282 Sensitivity: 0.7310177705977383 Specificity: 0.7198901877435352 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0036.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4215265495909585 AUROC: 0.7777061940300468 AUPRC: 0.4822121556299614 Sensitivity: 0.7306763285024155 Specificity: 0.6908646003262643 Threshold: 0.12 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.41105277946701757 AUROC: 0.7876117100358984 AUPRC: 0.48710021627454686 Sensitivity: 0.7415185783521809 Specificity: 0.7105915692525682 Threshold: 0.12 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0037.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4594794395897124 AUROC: 0.7771020009299321 AUPRC: 0.4807582597378228 Sensitivity: 0.6908212560386473 Specificity: 0.732463295269168 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.447756086786588 AUROC: 0.7867474084902633 AUPRC: 0.4866810971431349 Sensitivity: 0.7051696284329564 Specificity: 0.748848742472547 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0038.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.42356007049481076 AUROC: 0.7787336506660573 AUPRC: 0.4834543583806829 Sensitivity: 0.6944444444444444 Specificity: 0.7281131049483415 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.4077646690938208 AUROC: 0.7889590940903735 AUPRC: 0.49095581658030996 Sensitivity: 0.7415185783521809 Specificity: 0.7096174282678002 Threshold: 0.12 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0039.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4243027302953932 AUROC: 0.7791794072602996 AUPRC: 0.48445055643950596 Sensitivity: 0.7222222222222222 Specificity: 0.7014681892332789 Threshold: 0.11 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4159349892978315 AUROC: 0.7887755931973514 AUPRC: 0.4881821407686285 Sensitivity: 0.7358642972536349 Specificity: 0.7167020899752037 Threshold: 0.11 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0040.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.42452599108219147 AUROC: 0.7791168535724886 AUPRC: 0.4844328737821203 Sensitivity: 0.6956521739130435 Specificity: 0.7289287656334965 Threshold: 0.11 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4159060780096937 AUROC: 0.7884575023305561 AUPRC: 0.48903885383501056 Sensitivity: 0.7427302100161551 Specificity: 0.7077577045696068 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0041.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.44213269154230755 AUROC: 0.7791472274104021 AUPRC: 0.4836363160244979 Sensitivity: 0.7270531400966184 Specificity: 0.7001087547580207 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.42674812357182856 AUROC: 0.7881756609823416 AUPRC: 0.4872582136437975 Sensitivity: 0.7350565428109854 Specificity: 0.7182075805880269 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0042.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4627121827668614 AUROC: 0.778293312107767 AUPRC: 0.48298167583351337 Sensitivity: 0.7016908212560387 Specificity: 0.7240348015225666 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.4524075964258777 AUROC: 0.7874364712433122 AUPRC: 0.4872517650285482 Sensitivity: 0.7160743134087237 Specificity: 0.741675522493801 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0043.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.41016949713230133 AUROC: 0.7803432998925588 AUPRC: 0.484976051800039 Sensitivity: 0.7125603864734299 Specificity: 0.7153344208809136 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4008929928143819 AUROC: 0.7897628605494791 AUPRC: 0.4914360753568502 Sensitivity: 0.725767366720517 Specificity: 0.7312256464753808 Threshold: 0.13 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0044.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4456743498643239 AUROC: 0.7782585053313473 AUPRC: 0.48297779470587116 Sensitivity: 0.714975845410628 Specificity: 0.7085372485046221 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4361056395702892 AUROC: 0.7877446010929149 AUPRC: 0.4893255315414561 Sensitivity: 0.7273828756058158 Specificity: 0.7247608926673751 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0045.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.48136137508683735 AUROC: 0.779210109464028 AUPRC: 0.4839404121983896 Sensitivity: 0.6944444444444444 Specificity: 0.7357259380097879 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.4691621826754676 AUROC: 0.78819785420018 AUPRC: 0.48823178620337304 Sensitivity: 0.7112277867528272 Specificity: 0.750177116542685 Threshold: 0.07 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0046.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
Loss: 0.41084519525369007 AUROC: 0.7788505488962968 AUPRC: 0.48446812439351383 Sensitivity: 0.7041062801932367 Specificity: 0.722131593257205 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.39752321248805084 AUROC: 0.7887838731811218 AUPRC: 0.49047827950731654 Sensitivity: 0.7209208400646203 Specificity: 0.7368048175699611 Threshold: 0.13 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0047.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4454021350377136 AUROC: 0.7788182048634916 AUPRC: 0.4841261370503188 Sensitivity: 0.7065217391304348 Specificity: 0.7207721587819467 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4333140106388816 AUROC: 0.7882907760050865 AUPRC: 0.4892661606404624 Sensitivity: 0.718497576736672 Specificity: 0.7386645412681544 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0048.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.41692835258113015 AUROC: 0.7795506248144732 AUPRC: 0.48541230858120527 Sensitivity: 0.7028985507246377 Specificity: 0.7224034801522566 Threshold: 0.12 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4008603527866028 AUROC: 0.7889247044385577 AUPRC: 0.4897461148666532 Sensitivity: 0.7180936995153473 Specificity: 0.738753099539497 Threshold: 0.12 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0049.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4130522575643327 AUROC: 0.7793780685785437 AUPRC: 0.4855772162365372 Sensitivity: 0.6956521739130435 Specificity: 0.7305600870038064 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4046692679877634 AUROC: 0.7887479455625582 AUPRC: 0.4907356719790222 Sensitivity: 0.7415185783521809 Specificity: 0.7108572440665958 Threshold: 0.12 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0050.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:23<00:00, 1.55it/s]
Loss: 0.39881172796918285 AUROC: 0.7780115742382571 AUPRC: 0.48381624327988293 Sensitivity: 0.7101449275362319 Specificity: 0.7123436650353453 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.3927504108459861 AUROC: 0.7883198364232874 AUPRC: 0.4925094879956653 Sensitivity: 0.7241518578352181 Specificity: 0.7291888062345023 Threshold: 0.13 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0051.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.41115997317764497 AUROC: 0.77733727503658 AUPRC: 0.484906173848988 Sensitivity: 0.6980676328502415 Specificity: 0.723762914627515 Threshold: 0.12 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.4025308161422058 AUROC: 0.7869387601670332 AUPRC: 0.4883659396307304 Sensitivity: 0.7136510500807755 Specificity: 0.7391073326248672 Threshold: 0.12 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0052.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4339836256371604 AUROC: 0.7792872754306189 AUPRC: 0.48464549648685046 Sensitivity: 0.7089371980676329 Specificity: 0.7188689505165851 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.42435705965315856 AUROC: 0.7882271292183924 AUPRC: 0.4885945048910971 Sensitivity: 0.7193053311793215 Specificity: 0.7365391427559334 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0053.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4164707660675049 AUROC: 0.7800668158760931 AUPRC: 0.48639719859530545 Sensitivity: 0.7065217391304348 Specificity: 0.7185970636215334 Threshold: 0.11 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4107727626407588 AUROC: 0.7890687546961636 AUPRC: 0.48983970349444245 Sensitivity: 0.7241518578352181 Specificity: 0.7343251859723698 Threshold: 0.11 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0054.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4510903043879403 AUROC: 0.7811776774291846 AUPRC: 0.48465068010134127 Sensitivity: 0.717391304347826 Specificity: 0.7123436650353453 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.43605671215940406 AUROC: 0.7900284638301696 AUPRC: 0.4896198099990421 Sensitivity: 0.7281906300484653 Specificity: 0.7306942968473256 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0055.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.45261193729109234 AUROC: 0.7811760356001083 AUPRC: 0.4845428864047244 Sensitivity: 0.7185990338164251 Specificity: 0.7096247960848288 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.44129101145598626 AUROC: 0.7902594986257014 AUPRC: 0.48824182388344073 Sensitivity: 0.7306138933764136 Specificity: 0.7282146652497343 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0056.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4438655881418122 AUROC: 0.7798139741983277 AUPRC: 0.48474092226213905 Sensitivity: 0.7246376811594203 Specificity: 0.6930396954866775 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.4357909560203552 AUROC: 0.7891218860823023 AUPRC: 0.4900868198449559 Sensitivity: 0.7435379644588045 Specificity: 0.712185618136734 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0057.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.3956108209159639 AUROC: 0.781154691822115 AUPRC: 0.48569424196111194 Sensitivity: 0.7065217391304348 Specificity: 0.7128874388254486 Threshold: 0.18 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.38311083821786773 AUROC: 0.7922334431799258 AUPRC: 0.49423266372289204 Sensitivity: 0.7241518578352181 Specificity: 0.7300743889479278 Threshold: 0.18 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0058.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.40568679281406933 AUROC: 0.7800169042721705 AUPRC: 0.4857953323185554 Sensitivity: 0.7089371980676329 Specificity: 0.7158781946710169 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.3979954983073252 AUROC: 0.7892138600704919 AUPRC: 0.4901140226260444 Sensitivity: 0.7225363489499192 Specificity: 0.7329968119022316 Threshold: 0.13 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0059.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.447651219036844 AUROC: 0.7834287892758351 AUPRC: 0.4857223238458013 Sensitivity: 0.7004830917874396 Specificity: 0.7272974442631865 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4370264006709611 AUROC: 0.7926352995844772 AUPRC: 0.49138679659162826 Sensitivity: 0.7144588045234249 Specificity: 0.7492915338292596 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0060.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.3998831096622679 AUROC: 0.7796891951885214 AUPRC: 0.4863121831693756 Sensitivity: 0.717391304347826 Specificity: 0.7039151712887439 Threshold: 0.14 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.3878974324023282 AUROC: 0.7899695561270592 AUPRC: 0.4935321644439091 Sensitivity: 0.7326332794830371 Specificity: 0.7201558625575629 Threshold: 0.14 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0061.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.39645228245192105 AUROC: 0.7809373136523998 AUPRC: 0.4858618510036449 Sensitivity: 0.7234299516908212 Specificity: 0.6982055464926591 Threshold: 0.14 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.38963383008484487 AUROC: 0.7909951295812095 AUPRC: 0.4923810122665175 Sensitivity: 0.7370759289176091 Specificity: 0.7171448813319165 Threshold: 0.14 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0062.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.4128438780705134 AUROC: 0.7797926304203345 AUPRC: 0.4849167860048511 Sensitivity: 0.7222222222222222 Specificity: 0.6992930940728657 Threshold: 0.11 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.39705110368905244 AUROC: 0.789847895803969 AUPRC: 0.49164922853194754 Sensitivity: 0.7346526655896607 Specificity: 0.7161707403471484 Threshold: 0.11 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0063.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.395375258806679 AUROC: 0.7800175610038012 AUPRC: 0.486190428989976 Sensitivity: 0.7258454106280193 Specificity: 0.6982055464926591 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.3885726372676867 AUROC: 0.7903412612300186 AUPRC: 0.4939900911194957 Sensitivity: 0.7366720516962844 Specificity: 0.7145766914629826 Threshold: 0.13 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0064.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.43667326288090813 AUROC: 0.7790281948023632 AUPRC: 0.485577177772051 Sensitivity: 0.714975845410628 Specificity: 0.7050027188689505 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.4238263287202076 AUROC: 0.7886170395556463 AUPRC: 0.48865790255294356 Sensitivity: 0.7346526655896607 Specificity: 0.7227240524264966 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0065.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4399115029308531 AUROC: 0.7805426179424334 AUPRC: 0.4858363417816116 Sensitivity: 0.7198067632850241 Specificity: 0.7047308319738989 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4230909397204717 AUROC: 0.7897415078483515 AUPRC: 0.4909831915491652 Sensitivity: 0.7350565428109854 Specificity: 0.7237867516826072 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0066.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 0.4252910481558906 AUROC: 0.7807048306551818 AUPRC: 0.4875904315838898 Sensitivity: 0.7270531400966184 Specificity: 0.6941272430668842 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4156855922882204 AUROC: 0.7895769811729979 AUPRC: 0.48926541461128303 Sensitivity: 0.7407108239095315 Specificity: 0.7135139922068722 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0067.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.42014243288172615 AUROC: 0.7820557276192428 AUPRC: 0.48721183413902247 Sensitivity: 0.7246376811594203 Specificity: 0.6943991299619359 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4067038314210044 AUROC: 0.7912322447103958 AUPRC: 0.4913565848583541 Sensitivity: 0.7411147011308562 Specificity: 0.7128940843074744 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0068.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:23<00:00, 1.55it/s]
Loss: 0.4255482860737377 AUROC: 0.7834168039235775 AUPRC: 0.487230637340507 Sensitivity: 0.7222222222222222 Specificity: 0.7041870581837956 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.41592212531853606 AUROC: 0.7934365444934496 AUPRC: 0.4939406753805227 Sensitivity: 0.7378836833602584 Specificity: 0.7207757704569607 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0069.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4088372074895435 AUROC: 0.7821302666593112 AUPRC: 0.4867830551030794 Sensitivity: 0.7210144927536232 Specificity: 0.6976617727025557 Threshold: 0.12 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.39604386866644575 AUROC: 0.7913885092853132 AUPRC: 0.4913322981883188 Sensitivity: 0.7378836833602584 Specificity: 0.7162592986184909 Threshold: 0.12 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0070.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4378229056795438 AUROC: 0.7833212494713311 AUPRC: 0.48632275575348266 Sensitivity: 0.6968599033816425 Specificity: 0.7286568787384448 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4151849339681643 AUROC: 0.792805441626794 AUPRC: 0.49432037331934975 Sensitivity: 0.7124394184168013 Specificity: 0.7510626992561105 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0071.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.43504492027892006 AUROC: 0.7817465712041569 AUPRC: 0.4868624822849646 Sensitivity: 0.7222222222222222 Specificity: 0.6976617727025557 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.41374945268034935 AUROC: 0.7906335285621171 AUPRC: 0.4896633060690885 Sensitivity: 0.7382875605815832 Specificity: 0.7176762309599717 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0072.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.5022470355033875 AUROC: 0.7802927315570056 AUPRC: 0.4841604618692248 Sensitivity: 0.7258454106280193 Specificity: 0.6905927134312126 Threshold: 0.05 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.48924071438334604 AUROC: 0.7891103692150274 AUPRC: 0.48750036525854484 Sensitivity: 0.7439418416801292 Specificity: 0.7085547290116897 Threshold: 0.05
Plot AUROC/AUPRC for Each Intermediate Model
AUROC/AUPRC Plots - Best Model Based on Validation Loss
Epoch with best Validation Loss: 57, 0.3943
Best Model Based on Validation Loss: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0057.model
Generate Stats Based on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.38311083821786773 AUROC: 0.7922334431799258 AUPRC: 0.49423266372289204 Sensitivity: 0.7241518578352181 Specificity: 0.7300743889479278 Threshold: 0.18
best_model_val_test_auroc: 0.7922334431799258
best_model_val_test_auprc: 0.49423266372289204
AUROC/AUPRC Plots - Best Model Based on Model AUROC
Epoch with best model Test AUROC: 68, 0.7934
Best Model Based on Model AUROC: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0068.model
Generate Stats Based on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.41592212531853606 AUROC: 0.7934365444934496 AUPRC: 0.4939406753805227 Sensitivity: 0.7378836833602584 Specificity: 0.7207757704569607 Threshold: 0.09
best_model_auroc_test_auroc: 0.7934365444934496
best_model_auroc_test_auprc: 0.4939406753805227
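Each log line above pairs AUROC/AUPRC with a sensitivity/specificity reading at a reported threshold. As a rough illustration of how such numbers can be derived from raw predicted probabilities (the notebook's own metric code is not shown in this output, and the threshold-selection rule is assumed here to be Youden's J; `auroc` and `best_operating_point` are illustrative names, not project functions):

```python
# Minimal, dependency-light sketch of the logged metrics.
# Assumption: the threshold is chosen to maximize Youden's J
# (sensitivity + specificity - 1); the project may use a different rule.
import numpy as np

def auroc(y_true, y_prob):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores above a randomly chosen negative."""
    pos = y_prob[y_true == 1]
    neg = y_prob[y_true == 0]
    diff = pos[:, None] - neg[None, :]          # all positive/negative pairs
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def best_operating_point(y_true, y_prob, thresholds=np.arange(0.01, 1.0, 0.01)):
    """Sweep candidate thresholds (the log reports values like 0.09 or 0.18)
    and return (threshold, sensitivity, specificity) maximizing Youden's J."""
    best = None
    for t in thresholds:
        pred = y_prob >= t
        sens = (pred & (y_true == 1)).sum() / max((y_true == 1).sum(), 1)
        spec = (~pred & (y_true == 0)).sum() / max((y_true == 0).sum(), 1)
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, float(t), float(sens), float(spec))
    return best[1:]

# Toy example: perfectly separable scores.
y_true = np.array([0, 0, 0, 1, 1])
y_prob = np.array([0.05, 0.10, 0.30, 0.40, 0.90])
```

With separable scores as above, `auroc` returns 1.0 and the swept operating point reaches sensitivity and specificity of 1.0; on real, overlapping score distributions the sweep trades the two off, which is why the logged sensitivity/specificity hover near 0.70-0.75.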
run_experiment(
    experimentNamePrefix=None,
    useAbp=True,
    useEeg=False,
    useEcg=False,
    nResiduals=12,
    skip_connection=False,
    batch_size=128,
    learning_rate=1e-4,
    weight_decay=1e-1,
    balance_labels=False,
    # pos_weight=2.0,
    pos_weight=None,
    max_epochs=100,
    patience=15,
    device=device,
)
Experiment Setup
name: ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES
prediction_window: 003
max_cases: _ALL
use_abp: True
use_eeg: False
use_ecg: False
n_residuals: 12
skip_connection: False
batch_size: 128
learning_rate: 0.0001
weight_decay: 0.1
balance_labels: False
max_epochs: 100
patience: 15
device: mps
Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (abpFc): Linear(in_features=2814, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=32, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
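The printed repr above fully specifies each `ResidualBlock`'s layers, but not the order of operations in its forward pass. The sketch below is a hedged reconstruction of one block: the pre-activation ordering (BN → ReLU → Dropout → Conv, twice) and the use of `residualConv` as a shortcut projection are assumptions; only the module names, channel counts, kernel sizes, and the optional `MaxPool1d` downsample come from the repr.

```python
# Hedged sketch of one ResidualBlock from the printed architecture.
# Forward-pass ordering is an assumption, not taken from the project code.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=15, downsample=False, p_drop=0.5):
        super().__init__()
        pad = kernel_size // 2  # "same" padding, e.g. 7 for kernel 15
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=p_drop)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(2) if downsample else None

    def forward(self, x):
        out = self.conv1(self.dropout(self.relu(self.bn1(x))))
        out = self.conv2(self.dropout(self.relu(self.bn2(out))))
        out = out + self.residualConv(x)    # shortcut projection
        if self.downsample is not None:
            out = self.downsample(out)      # halve the temporal resolution
        return out

block = ResidualBlock(1, 2, downsample=True)
y = block(torch.randn(8, 1, 100))  # (batch, channels, time) -> (8, 2, 50)
```

Stacking twelve such blocks with downsampling at blocks 0, 2, 4, 6, 8, and 10 reduces the temporal axis by 2^6, which is consistent with the 2814-feature input to `abpFc` in the printed model.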
Training Loop
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 06:53:44.333928] Completed epoch 0 with training loss 0.43419403, validation loss 0.56762141 Validation loss improved to 0.56762141. Model saved.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 06:56:38.091577] Completed epoch 1 with training loss 0.39855468, validation loss 0.57177871 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:34<00:00, 1.38it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 06:59:33.529524] Completed epoch 2 with training loss 0.38922638, validation loss 0.54377794 Validation loss improved to 0.54377794. Model saved.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
Training log, epochs 3–85 (2024-05-01 07:02–11:03; each epoch ran 212 training batches at ~1.4 it/s and 36 validation batches at ~1.7 it/s, roughly 2.9 minutes per epoch):

| Epoch | Training loss | Validation loss | Status |
|---|---|---|---|
| 3 | 0.38692194 | 0.59764856 | no improvement (1) |
| 4 | 0.38397166 | 0.69936341 | no improvement (2) |
| 5 | 0.38440049 | 0.71621782 | no improvement (3) |
| 6 | 0.38293913 | 0.69689363 | no improvement (4) |
| 7 | 0.38290060 | 0.70034677 | no improvement (5) |
| 8 | 0.38091624 | 0.67593640 | no improvement (6) |
| 9 | 0.38080943 | 0.65234399 | no improvement (7) |
| 10 | 0.38001367 | 0.62095714 | no improvement (8) |
| 11 | 0.38059413 | 0.61313313 | no improvement (9) |
| 12 | 0.38108063 | 0.61022496 | no improvement (10) |
| 13 | 0.38042119 | 0.55243230 | no improvement (11) |
| 14 | 0.38047346 | 0.53914237 | improved; model saved |
| 15 | 0.37921345 | 0.52608144 | improved; model saved |
| 16 | 0.37882823 | 0.53083640 | no improvement (1) |
| 17 | 0.37929690 | 0.53374118 | no improvement (2) |
| 18 | 0.37816176 | 0.51576859 | improved; model saved |
| 19 | 0.37804997 | 0.53554320 | no improvement (1) |
| 20 | 0.37764573 | 0.53133297 | no improvement (2) |
| 21 | 0.37747827 | 0.51162010 | improved; model saved |
| 22 | 0.37838885 | 0.50352824 | improved; model saved |
| 23 | 0.37842345 | 0.48934540 | improved; model saved |
| 24 | 0.37803602 | 0.48099512 | improved; model saved |
| 25 | 0.37743896 | 0.49679294 | no improvement (1) |
| 26 | 0.37707520 | 0.47876263 | improved; model saved |
| 27 | 0.37794012 | 0.46155822 | improved; model saved |
| 28 | 0.37722138 | 0.48053285 | no improvement (1) |
| 29 | 0.37800130 | 0.45332208 | improved; model saved |
| 30 | 0.37780187 | 0.45396486 | no improvement (1) |
| 31 | 0.37772182 | 0.47826010 | no improvement (2) |
| 32 | 0.37743989 | 0.48510012 | no improvement (3) |
| 33 | 0.37743017 | 0.44634697 | improved; model saved |
| 34 | 0.37783101 | 0.46568093 | no improvement (1) |
| 35 | 0.37797114 | 0.45226336 | no improvement (2) |
| 36 | 0.37739447 | 0.44907296 | no improvement (3) |
| 37 | 0.37763634 | 0.45029989 | no improvement (4) |
| 38 | 0.37796205 | 0.44434643 | improved; model saved |
| 39 | 0.37746301 | 0.44999796 | no improvement (1) |
| 40 | 0.37700018 | 0.44257468 | improved; model saved |
| 41 | 0.37813476 | 0.44828117 | no improvement (1) |
| 42 | 0.37709162 | 0.43892229 | improved; model saved |
| 43 | 0.37660491 | 0.44886705 | no improvement (1) |
| 44 | 0.37785381 | 0.46807405 | no improvement (2) |
| 45 | 0.37821108 | 0.44598266 | no improvement (3) |
| 46 | 0.37732998 | 0.45765102 | no improvement (4) |
| 47 | 0.37832189 | 0.46518490 | no improvement (5) |
| 48 | 0.37742165 | 0.47615612 | no improvement (6) |
| 49 | 0.37797734 | 0.47023672 | no improvement (7) |
| 50 | 0.37765646 | 0.44452465 | no improvement (8) |
| 51 | 0.37778243 | 0.50632834 | no improvement (9) |
| 52 | 0.37712580 | 0.48539767 | no improvement (10) |
| 53 | 0.37849849 | 0.44507632 | no improvement (11) |
| 54 | 0.37802541 | 0.44354913 | no improvement (12) |
| 55 | 0.37935397 | 0.48991907 | no improvement (13) |
| 56 | 0.37857312 | 0.44239178 | no improvement (14) |
| 57 | 0.37768704 | 0.42782459 | improved; model saved |
| 58 | 0.37747362 | 0.43070006 | no improvement (1) |
| 59 | 0.37742090 | 0.51145303 | no improvement (2) |
| 60 | 0.37712154 | 0.43563601 | no improvement (3) |
| 61 | 0.37709275 | 0.46910545 | no improvement (4) |
| 62 | 0.37697005 | 0.43338609 | no improvement (5) |
| 63 | 0.37613228 | 0.42715088 | improved; model saved |
| 64 | 0.37709042 | 0.46264511 | no improvement (1) |
| 65 | 0.37662363 | 0.43425047 | no improvement (2) |
| 66 | 0.37691519 | 0.45223248 | no improvement (3) |
| 67 | 0.37639612 | 0.43768227 | no improvement (4) |
| 68 | 0.37563494 | 0.42679688 | improved; model saved |
| 69 | 0.37672561 | 0.43704200 | no improvement (1) |
| 70 | 0.37659800 | 0.41921169 | improved; model saved |
| 71 | 0.37593859 | 0.43805861 | no improvement (1) |
| 72 | 0.37584388 | 0.43129501 | no improvement (2) |
| 73 | 0.37650907 | 0.43893415 | no improvement (3) |
| 74 | 0.37585056 | 0.48648372 | no improvement (4) |
| 75 | 0.37543380 | 0.44375449 | no improvement (5) |
| 76 | 0.37484172 | 0.42292249 | no improvement (6) |
| 77 | 0.37598276 | 0.42978266 | no improvement (7) |
| 78 | 0.37564203 | 0.47177675 | no improvement (8) |
| 79 | 0.37568074 | 0.53045630 | no improvement (9) |
| 80 | 0.37530547 | 0.45095572 | no improvement (10) |
| 81 | 0.37584543 | 0.44978297 | no improvement (11) |
| 82 | 0.37543872 | 0.49467739 | no improvement (12) |
| 83 | 0.37703913 | 0.48741648 | no improvement (13) |
| 84 | 0.37603155 | 0.44698837 | no improvement (14) |
| 85 | 0.37641913 | 0.48439783 | no improvement (15) |

Early stopping triggered after 15 consecutive epochs without improvement in validation loss; the best checkpoint (validation loss 0.41921169, epoch 70) was saved.

Plot Validation and Loss Values from Training¶
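The training log above reflects checkpoint-on-improvement with patience-based early stopping. A minimal sketch of that bookkeeping, assuming a patience of 15 epochs to match the log; the `EarlyStopper` class and its method names are illustrative, not the project's actual API:

```python
# Minimal early-stopping bookkeeping, mirroring the training log.
# Assumption: patience of 15 epochs (the log stops after 15 stale epochs);
# class and method names here are illustrative, not the project's API.
class EarlyStopper:
    def __init__(self, patience=15):
        self.patience = patience
        self.best = float("inf")
        self.stale = 0  # epochs since the last validation-loss improvement

    def step(self, val_loss):
        """Return (save_checkpoint, stop_training) for one epoch."""
        if val_loss < self.best:
            self.best = val_loss
            self.stale = 0
            return True, False  # improved: save model, keep training
        self.stale += 1
        return False, self.stale >= self.patience

# Demo with generic validation losses (not the log's exact values):
stopper = EarlyStopper(patience=15)
decisions = [stopper.step(v) for v in (0.6, 0.5, 0.55)]
```

In the actual run this logic would sit inside the per-epoch loop, saving the model whenever `save_checkpoint` is true and breaking out when `stop_training` becomes true.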
Generate AUROC/AUPRC for Each Intermediate Model Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0000.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.5683926625384225 AUROC: 0.7378939733051727 AUPRC: 0.42603464425516313 Sensitivity: 0.7222222222222222 Specificity: 0.6237085372485046 Threshold: 0.4 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.5633382245346352 AUROC: 0.7523882298760985 AUPRC: 0.4175470243509898 Sensitivity: 0.7431340872374798 Specificity: 0.6369996457669146 Threshold: 0.4 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0001.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.5731217364470164 AUROC: 0.762107504340996 AUPRC: 0.47226268898134766 Sensitivity: 0.6956521739130435 Specificity: 0.6946710168569875 Threshold: 0.51 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.5610577557925824 AUROC: 0.7745981328654481 AUPRC: 0.46290642230653645 Sensitivity: 0.7164781906300485 Specificity: 0.7110343606092809 Threshold: 0.51 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0002.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.5446498642365137 AUROC: 0.7629776737514875 AUPRC: 0.47656216411835134 Sensitivity: 0.7041062801932367 Specificity: 0.6916802610114192 Threshold: 0.49 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.5336344970597161 AUROC: 0.7748206015438611 AUPRC: 0.46729364000197376 Sensitivity: 0.7189014539579968 Specificity: 0.7065178887708112 Threshold: 0.49 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0003.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.596780902809567 AUROC: 0.761886842513128 AUPRC: 0.47819541489912837 Sensitivity: 0.6871980676328503 Specificity: 0.7082653616095704 Threshold: 0.53 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.5867284318363225 AUROC: 0.7736600625659179 AUPRC: 0.46668902374543886 Sensitivity: 0.7047657512116317 Specificity: 0.7251151257527453 Threshold: 0.53 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0004.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.6992886347903146 AUROC: 0.7651164844893124 AUPRC: 0.4800360754024481 Sensitivity: 0.6727053140096618 Specificity: 0.7232191408374117 Threshold: 0.5700000000000001 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.6926877322020354 AUROC: 0.7769718235907789 AUPRC: 0.47294492768717367 Sensitivity: 0.6974959612277868 Specificity: 0.7411441728657456 Threshold: 0.5700000000000001 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0005.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
Loss: 0.7164582345220778 AUROC: 0.7652749209951848 AUPRC: 0.4803514509488937 Sensitivity: 0.7234299516908212 Specificity: 0.6718325176726482 Threshold: 0.55 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.64it/s]
Loss: 0.7124562638777273 AUROC: 0.776807082315414 AUPRC: 0.4751801198511172 Sensitivity: 0.6958804523424879 Specificity: 0.7455720864328729 Threshold: 0.56 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0006.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.6963221497005887 AUROC: 0.7088043740953522 AUPRC: 0.4460507183484746 Sensitivity: 0.7101449275362319 Specificity: 0.5570962479608483 Threshold: 0.51 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.64it/s]
Loss: 0.6952161893800453 AUROC: 0.7105214486988659 AUPRC: 0.43236800610099735 Sensitivity: 0.7197092084006462 Specificity: 0.5617251151257527 Threshold: 0.51 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0007.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
Loss: 0.7008311268356111 AUROC: 0.7299550729891534 AUPRC: 0.4609236622006322 Sensitivity: 0.6666666666666666 Specificity: 0.6873300706905927 Threshold: 0.52 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.61it/s]
Loss: 0.6987119394320028 AUROC: 0.7440339945016615 AUPRC: 0.4564475022634621 Sensitivity: 0.6946688206785138 Specificity: 0.6919943322706341 Threshold: 0.52 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0008.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.64it/s]
Loss: 0.6762116998434067 AUROC: 0.6836682993015002 AUPRC: 0.41788720896925413 Sensitivity: 0.6002415458937198 Specificity: 0.6740076128330614 Threshold: 0.5 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.63it/s]
Loss: 0.6744008676873313 AUROC: 0.6856538497525233 AUPRC: 0.4006647190225575 Sensitivity: 0.6009693053311793 Specificity: 0.6705632306057385 Threshold: 0.5 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0009.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 0.6526188237799538 AUROC: 0.6913666716578271 AUPRC: 0.4314059902511754 Sensitivity: 0.6183574879227053 Specificity: 0.655247417074497 Threshold: 0.48 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.6510535130898157 AUROC: 0.6974163088569145 AUPRC: 0.41692184826136724 Sensitivity: 0.624394184168013 Specificity: 0.6501948281969536 Threshold: 0.48 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0010.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 0.6203739162948396 AUROC: 0.7343246368930815 AUPRC: 0.4476234094734984 Sensitivity: 0.6570048309178744 Specificity: 0.7194127243066885 Threshold: 0.46 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.63it/s]
Loss: 0.6166662982216587 AUROC: 0.75356836898841 AUPRC: 0.44316014129471193 Sensitivity: 0.6861873990306947 Specificity: 0.7371590506553312 Threshold: 0.46 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0011.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 0.613969185286098 AUROC: 0.7555790993845111 AUPRC: 0.4669960918653875 Sensitivity: 0.7004830917874396 Specificity: 0.6952147906470908 Threshold: 0.46 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.6093447219442438 AUROC: 0.7715265807866034 AUPRC: 0.4673763218413152 Sensitivity: 0.7241518578352181 Specificity: 0.7120970598653914 Threshold: 0.46 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0012.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
Loss: 0.6100081503391266 AUROC: 0.7045518726045714 AUPRC: 0.4017782111150751 Sensitivity: 0.5857487922705314 Specificity: 0.6511691136487221 Threshold: 0.46 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.6058978890931165 AUROC: 0.71222272605536 AUPRC: 0.40803578624659315 Sensitivity: 0.5836025848142165 Specificity: 0.6664895501239816 Threshold: 0.46 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0013.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.5523882773187425 AUROC: 0.5873170674043076 AUPRC: 0.2753026126423947 Sensitivity: 0.49879227053140096 Specificity: 0.5799347471451876 Threshold: 0.38 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.68it/s]
Loss: 0.5475939299773287 AUROC: 0.5933410975617432 AUPRC: 0.28193872958669597 Sensitivity: 0.574313408723748 Specificity: 0.5253276656039674 Threshold: 0.37 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0014.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.5418470684025023 AUROC: 0.5959998804748432 AUPRC: 0.270004089108861 Sensitivity: 0.5253623188405797 Specificity: 0.5734094616639478 Threshold: 0.37 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.68it/s]
Loss: 0.5343247701724371 AUROC: 0.6073741499693551 AUPRC: 0.28086451453441635 Sensitivity: 0.5169628432956381 Specificity: 0.5833333333333334 Threshold: 0.37 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0015.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.58it/s]
Loss: 0.5261547681358125 AUROC: 0.5565498472442227 AUPRC: 0.30168599712011956 Sensitivity: 0.48429951690821255 Specificity: 0.5527460576400217 Threshold: 0.34 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.5196760796286441 AUROC: 0.5598306977590608 AUPRC: 0.3068305742242174 Sensitivity: 0.531502423263328 Specificity: 0.49176408076514344 Threshold: 0.33 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0016.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.5302023390928904 AUROC: 0.5248924273589144 AUPRC: 0.23186203362733257 Sensitivity: 0.47705314009661837 Specificity: 0.5114192495921697 Threshold: 0.34 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.5264010785354508 AUROC: 0.5351817583409302 AUPRC: 0.24314779631453726 Sensitivity: 0.4894991922455573 Specificity: 0.5278072972015586 Threshold: 0.34 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0017.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.5307177727421125 AUROC: 0.46579216282741354 AUPRC: 0.23967771410320535 Sensitivity: 0.4335748792270531 Specificity: 0.4926590538336052 Threshold: 0.33 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.5251328407062424 AUROC: 0.48401417690594856 AUPRC: 0.25280255700453913 Sensitivity: 0.49596122778675283 Specificity: 0.4450938717676231 Threshold: 0.32 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0018.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.5175685841176245 AUROC: 0.5132840390571436 AUPRC: 0.22984045133984238 Sensitivity: 0.5144927536231884 Specificity: 0.47172376291462753 Threshold: 0.31 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.5109412802590264 AUROC: 0.5278953726228757 AUPRC: 0.2397243623761822 Sensitivity: 0.5036348949919225 Specificity: 0.4901700318809777 Threshold: 0.31 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0019.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.5337546418110529 AUROC: 0.42505477141798864 AUPRC: 0.179317847896368 Sensitivity: 0.427536231884058 Specificity: 0.4613920609026645 Threshold: 0.32 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.5271435010212439 AUROC: 0.43813040541661874 AUPRC: 0.17827519652169987 Sensitivity: 0.42932148626817446 Specificity: 0.4754693588381155 Threshold: 0.32
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0020.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.64it/s]
Loss: 0.5294788554310799 AUROC: 0.3873912124053978 AUPRC: 0.17565408138841465 Sensitivity: 0.40700483091787437 Specificity: 0.41381185426862427 Threshold: 0.3
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.5252760709435852 AUROC: 0.3955231290169545 AUPRC: 0.17849690696525722 Sensitivity: 0.4018578352180937 Specificity: 0.4294190577399929 Threshold: 0.3
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0021.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.509985075228744 AUROC: 0.4323415372248623 AUPRC: 0.21160363631986367 Sensitivity: 0.427536231884058 Specificity: 0.433115823817292 Threshold: 0.27
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.61it/s]
Loss: 0.5030787247750494 AUROC: 0.4486604703059395 AUPRC: 0.21804962029075212 Sensitivity: 0.4349757673667205 Specificity: 0.45208997520368405 Threshold: 0.27
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0022.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.5017036770780882 AUROC: 0.4728786254869665 AUPRC: 0.23723995406030485 Sensitivity: 0.4323671497584541 Specificity: 0.48694942903752036 Threshold: 0.28
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.63it/s]
Loss: 0.49283913688527214 AUROC: 0.49703236797664235 AUPRC: 0.2565961730742732 Sensitivity: 0.49676898222940225 Specificity: 0.440843074743181 Threshold: 0.27
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0023.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.4924878891971376 AUROC: 0.5876817176421758 AUPRC: 0.22451763455487606 Sensitivity: 0.5434782608695652 Specificity: 0.5424143556280587 Threshold: 0.27
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.63it/s]
Loss: 0.48048229957068406 AUROC: 0.6060077738138772 AUPRC: 0.23620882881553446 Sensitivity: 0.5726978998384491 Specificity: 0.5614594403117251 Threshold: 0.27
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0024.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.65it/s]
Loss: 0.48142802384164596 AUROC: 0.5598556700895521 AUPRC: 0.21773844072328064 Sensitivity: 0.5531400966183575 Specificity: 0.5116911364872213 Threshold: 0.25
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.63it/s]
Loss: 0.47381070063070013 AUROC: 0.5770344832174208 AUPRC: 0.2282732560755365 Sensitivity: 0.5573505654281099 Specificity: 0.5300212539851222 Threshold: 0.25
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0025.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.65it/s]
Loss: 0.4988359792364968 AUROC: 0.48710064149545673 AUPRC: 0.20005146391431727 Sensitivity: 0.4867149758454106 Specificity: 0.46737357259380097 Threshold: 0.26
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.48879137800799477 AUROC: 0.5045220156720958 AUPRC: 0.20474740315593976 Sensitivity: 0.49596122778675283 Specificity: 0.4775947573503365 Threshold: 0.26
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0026.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:23<00:00, 1.51it/s]
Loss: 0.4805383235216141 AUROC: 0.5207835859123184 AUPRC: 0.2592841426473377 Sensitivity: 0.47342995169082125 Specificity: 0.5206634040239261 Threshold: 0.24
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.4714675407718729 AUROC: 0.5431398063277818 AUPRC: 0.273232085394267 Sensitivity: 0.4882875605815832 Specificity: 0.538788522848034 Threshold: 0.24
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0027.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 0.46489539328548646 AUROC: 0.6582013959487539 AUPRC: 0.293255542646896 Sensitivity: 0.5688405797101449 Specificity: 0.6296900489396411 Threshold: 0.25
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.63it/s]
Loss: 0.45507269766595626 AUROC: 0.6805138039311288 AUPRC: 0.30802051160166105 Sensitivity: 0.6009693053311793 Specificity: 0.6502833864682961 Threshold: 0.25
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0028.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.65it/s]
Loss: 0.4845607769158151 AUROC: 0.5515396416346838 AUPRC: 0.22806769218833972 Sensitivity: 0.5277777777777778 Specificity: 0.5154975530179445 Threshold: 0.25
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.64it/s]
Loss: 0.47562335072844114 AUROC: 0.5689859276757903 AUPRC: 0.24810875097783902 Sensitivity: 0.5395799676898223 Specificity: 0.5258590152320227 Threshold: 0.25
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0029.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4586900538868374 AUROC: 0.6576118151274191 AUPRC: 0.2810039282290017 Sensitivity: 0.5603864734299517 Specificity: 0.6313213703099511 Threshold: 0.2
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.44583993763835345 AUROC: 0.6755571338194167 AUPRC: 0.2899731853056048 Sensitivity: 0.5864297253634895 Specificity: 0.6439071909316331 Threshold: 0.2
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0030.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.458844405081537 AUROC: 0.6520611193859296 AUPRC: 0.2763888720347383 Sensitivity: 0.5845410628019324 Specificity: 0.5831973898858075 Threshold: 0.2
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4473388719337958 AUROC: 0.6684086643753108 AUPRC: 0.27801114148766676 Sensitivity: 0.6179321486268174 Specificity: 0.6054729011689692 Threshold: 0.2
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0031.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.47571711407767403 AUROC: 0.5504870650138045 AUPRC: 0.21069015201210622 Sensitivity: 0.5495169082125604 Specificity: 0.4921152800435019 Threshold: 0.23
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.470841898686356 AUROC: 0.5725882928826618 AUPRC: 0.22275962352767656 Sensitivity: 0.5638126009693053 Specificity: 0.5090329436769394 Threshold: 0.23
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0032.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
Loss: 0.48608753830194473 AUROC: 0.506820157983361 AUPRC: 0.23949321409227062 Sensitivity: 0.47101449275362317 Specificity: 0.5576400217509516 Threshold: 0.24
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.63it/s]
Loss: 0.4782045894750842 AUROC: 0.511556604043522 AUPRC: 0.2442247398691954 Sensitivity: 0.5464458804523424 Specificity: 0.43800921006021964 Threshold: 0.23
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0033.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 0.44829430679480237 AUROC: 0.732062032242896 AUPRC: 0.34825906001571516 Sensitivity: 0.6956521739130435 Specificity: 0.6778140293637847 Threshold: 0.17
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.43423692633708316 AUROC: 0.746804140864592 AUPRC: 0.3684684486668276 Sensitivity: 0.6861873990306947 Specificity: 0.7257350336521431 Threshold: 0.18
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0034.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.46276499413781697 AUROC: 0.6544003974539828 AUPRC: 0.27022219988445606 Sensitivity: 0.6002415458937198 Specificity: 0.6046764545948885 Threshold: 0.22
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4553670921811351 AUROC: 0.6682621104509061 AUPRC: 0.2713868135230051 Sensitivity: 0.6029886914378029 Specificity: 0.6236273467941906 Threshold: 0.22
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0035.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4479965666929881 AUROC: 0.7250688254748827 AUPRC: 0.34319042560384583 Sensitivity: 0.6908212560386473 Specificity: 0.6941272430668842 Threshold: 0.18
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.43971356804724093 AUROC: 0.745125038842602 AUPRC: 0.36645012016585576 Sensitivity: 0.7136510500807755 Specificity: 0.707137796670209 Threshold: 0.18
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0036.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4489823066525989 AUROC: 0.7330504133468883 AUPRC: 0.34165826096851243 Sensitivity: 0.7125603864734299 Specificity: 0.668026101141925 Threshold: 0.2
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4414519049503185 AUROC: 0.7453426074874229 AUPRC: 0.35685149522907467 Sensitivity: 0.721324717285945 Specificity: 0.6792419411973079 Threshold: 0.2
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0037.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:23<00:00, 1.54it/s]
Loss: 0.4501106060213513 AUROC: 0.7511371308183138 AUPRC: 0.42140237387397295 Sensitivity: 0.6630434782608695 Specificity: 0.7161500815660685 Threshold: 0.14
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.64it/s]
Loss: 0.4410963891832917 AUROC: 0.759710167662697 AUPRC: 0.4262785166243912 Sensitivity: 0.6853796445880452 Specificity: 0.7241409847679773 Threshold: 0.14
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0038.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.43641485770543414 AUROC: 0.7717611309444063 AUPRC: 0.41599241605380527 Sensitivity: 0.7415458937198067 Specificity: 0.6835236541598695 Threshold: 0.17
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4291247657879635 AUROC: 0.783662032594022 AUPRC: 0.43722494451385174 Sensitivity: 0.7394991922455574 Specificity: 0.706163655685441 Threshold: 0.17
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0039.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.450438030064106 AUROC: 0.7361510075576676 AUPRC: 0.3514837268243322 Sensitivity: 0.6992753623188406 Specificity: 0.6740076128330614 Threshold: 0.18
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4426027340469537 AUROC: 0.7431222305868538 AUPRC: 0.35401573276171405 Sensitivity: 0.7112277867528272 Specificity: 0.6860609280906836 Threshold: 0.18
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0040.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.44438838130897945 AUROC: 0.7262496289466287 AUPRC: 0.31152319071713763 Sensitivity: 0.6896135265700483 Specificity: 0.7017400761283307 Threshold: 0.2
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.66it/s]
Loss: 0.4330776684262134 AUROC: 0.7426158103267815 AUPRC: 0.3325252794088465 Sensitivity: 0.6946688206785138 Specificity: 0.7192702798441375 Threshold: 0.2
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0041.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4505154217282931 AUROC: 0.7117948015751051 AUPRC: 0.29056918228193335 Sensitivity: 0.6582125603864735 Specificity: 0.709352909189777 Threshold: 0.2
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.44059819396999145 AUROC: 0.724085743148394 AUPRC: 0.30590189706421467 Sensitivity: 0.7330371567043619 Specificity: 0.6690577399929153 Threshold: 0.19
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0042.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.439537117878596 AUROC: 0.748272959994536 AUPRC: 0.3943197770743288 Sensitivity: 0.6751207729468599 Specificity: 0.7050027188689505 Threshold: 0.18
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.4307770190967454 AUROC: 0.7592269242038483 AUPRC: 0.3970870183105635 Sensitivity: 0.721324717285945 Specificity: 0.6884520014169323 Threshold: 0.17
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0043.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4485977134770817 AUROC: 0.7068026560854066 AUPRC: 0.28808105002450496 Sensitivity: 0.7004830917874396 Specificity: 0.6593257205002719 Threshold: 0.18
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.4414895979894532 AUROC: 0.7225559490842874 AUPRC: 0.3040545174828972 Sensitivity: 0.7096122778675282 Specificity: 0.6815444562522139 Threshold: 0.18
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0044.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.466417879694038 AUROC: 0.7548302611427655 AUPRC: 0.454241218883878 Sensitivity: 0.7210144927536232 Specificity: 0.6370309951060359 Threshold: 0.09
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.46000255340779267 AUROC: 0.7651308924155779 AUPRC: 0.4386982516845746 Sensitivity: 0.6647819063004846 Specificity: 0.7489373007438895 Threshold: 0.1
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0045.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.4455600513352288 AUROC: 0.7636015688005191 AUPRC: 0.39934605815689583 Sensitivity: 0.6884057971014492 Specificity: 0.7229472539423599 Threshold: 0.15
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4366614526068723 AUROC: 0.7756235811362584 AUPRC: 0.4014386514206364 Sensitivity: 0.7015347334410339 Specificity: 0.7309599716613532 Threshold: 0.15
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0046.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:23<00:00, 1.54it/s]
Loss: 0.46113720784584683 AUROC: 0.7527390634481563 AUPRC: 0.38138326147417795 Sensitivity: 0.6702898550724637 Specificity: 0.7079934747145188 Threshold: 0.11
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.64it/s]
Loss: 0.4504850179784828 AUROC: 0.7620455701693394 AUPRC: 0.4007499568665934 Sensitivity: 0.6898222940226171 Specificity: 0.7175876726886291 Threshold: 0.11
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0047.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
Loss: 0.46397315959135693 AUROC: 0.7636908843022752 AUPRC: 0.44458881205083767 Sensitivity: 0.677536231884058 Specificity: 0.7126155519303969 Threshold: 0.11
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4562339050074418 AUROC: 0.7687536982735286 AUPRC: 0.4220393322933185 Sensitivity: 0.7298061389337641 Specificity: 0.6798618490967057 Threshold: 0.1
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0048.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.65it/s]
Loss: 0.481421838204066 AUROC: 0.747244846626895 AUPRC: 0.4094276212003047 Sensitivity: 0.6871980676328503 Specificity: 0.6653072321914084 Threshold: 0.09
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.465179573468588 AUROC: 0.7576580729376796 AUPRC: 0.4184218642570383 Sensitivity: 0.7144588045234249 Specificity: 0.677647892313142 Threshold: 0.09
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0049.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.65it/s]
Loss: 0.46633778346909416 AUROC: 0.7419067677507993 AUPRC: 0.4134003713786327 Sensitivity: 0.6497584541062802 Specificity: 0.7199564980967917 Threshold: 0.12
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.45781272442804444 AUROC: 0.7486985224646153 AUPRC: 0.4032399843122282 Sensitivity: 0.6567043618739903 Specificity: 0.7277718738930216 Threshold: 0.12
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0050.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
Loss: 0.4476533515585793 AUROC: 0.7787702634544609 AUPRC: 0.4576678400008986 Sensitivity: 0.7137681159420289 Specificity: 0.699021207177814 Threshold: 0.13
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.64it/s]
Loss: 0.43746074523638795 AUROC: 0.7852162731760859 AUPRC: 0.4496110795955187 Sensitivity: 0.7419224555735057 Specificity: 0.6997874601487779 Threshold: 0.13
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0051.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
Loss: 0.5088118240237236 AUROC: 0.7439475612927631 AUPRC: 0.3906424678004238 Sensitivity: 0.6352657004830918 Specificity: 0.722131593257205 Threshold: 0.08
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.64it/s]
Loss: 0.49908857558060576 AUROC: 0.7451266304593527 AUPRC: 0.35985671146637915 Sensitivity: 0.635702746365105 Specificity: 0.7107686857952533 Threshold: 0.08
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0052.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
Loss: 0.4802539489335484 AUROC: 0.76934485109267 AUPRC: 0.4245951515573104 Sensitivity: 0.7222222222222222 Specificity: 0.6954866775421424 Threshold: 0.09
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.472586281735588 AUROC: 0.7662473310911924 AUPRC: 0.41490644422753997 Sensitivity: 0.7039579967689822 Specificity: 0.6878320935175345 Threshold: 0.09
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0053.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4418853380613857 AUROC: 0.7764324630325764 AUPRC: 0.46430447773918915 Sensitivity: 0.6968599033816425 Specificity: 0.711799891245242 Threshold: 0.16
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.43402901443618314 AUROC: 0.7849516892454492 AUPRC: 0.4558514728609739 Sensitivity: 0.7051696284329564 Specificity: 0.7260007084661707 Threshold: 0.16
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0054.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.44870152324438095 AUROC: 0.7514832283876187 AUPRC: 0.4378308499804733 Sensitivity: 0.678743961352657 Specificity: 0.689505165851006 Threshold: 0.17
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.4385465085506439 AUROC: 0.7579033786339651 AUPRC: 0.4317349117911726 Sensitivity: 0.6853796445880452 Specificity: 0.7083776124690047 Threshold: 0.17
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0055.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.48680800365077126 AUROC: 0.7865996537710845 AUPRC: 0.467220443370476 Sensitivity: 0.716183574879227 Specificity: 0.7088091353996737 Threshold: 0.09
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.47626493484885607 AUROC: 0.7825155320334868 AUPRC: 0.437908678191079 Sensitivity: 0.721324717285945 Specificity: 0.7068721218561813 Threshold: 0.09
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0056.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.43925943556759095 AUROC: 0.779539788742569 AUPRC: 0.4595094627602992 Sensitivity: 0.6932367149758454 Specificity: 0.7253942359978249 Threshold: 0.13
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.4307397759898945 AUROC: 0.787539783265434 AUPRC: 0.4410860265714787 Sensitivity: 0.7088045234248789 Specificity: 0.730428622033298 Threshold: 0.13
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0057.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.42559706833627486 AUROC: 0.778418747849204 AUPRC: 0.4532245457171048 Sensitivity: 0.7089371980676329 Specificity: 0.7041870581837956 Threshold: 0.13
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.66it/s]
Loss: 0.41919947553564 AUROC: 0.7843073884780968 AUPRC: 0.4422555230452818 Sensitivity: 0.7160743134087237 Specificity: 0.7160821820758059 Threshold: 0.13
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0058.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.4298534591992696 AUROC: 0.7696920979423284 AUPRC: 0.45957654576499724 Sensitivity: 0.6847826086956522 Specificity: 0.7145187601957586 Threshold: 0.18
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.4213959013146383 AUROC: 0.7786616019633325 AUPRC: 0.4565363927685516 Sensitivity: 0.7015347334410339 Specificity: 0.7308714133900106 Threshold: 0.18
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0059.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.5207877109448115 AUROC: 0.7634439532091848 AUPRC: 0.42131341462989974 Sensitivity: 0.6956521739130435 Specificity: 0.7126155519303969 Threshold: 0.06
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.65it/s]
Loss: 0.5043553171886338 AUROC: 0.7606588606627878 AUPRC: 0.39716113644819784 Sensitivity: 0.6809369951534734 Specificity: 0.7088204038257173 Threshold: 0.06
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0060.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
Loss: 0.4362982345951928 AUROC: 0.7713168519963328 AUPRC: 0.4391510528589136 Sensitivity: 0.7185990338164251 Specificity: 0.6881457313757476 Threshold: 0.11
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.61it/s]
Loss: 0.42552329075557216 AUROC: 0.7779292973080003 AUPRC: 0.42644788147056234 Sensitivity: 0.7350565428109854 Specificity: 0.6943854055968828 Threshold: 0.11
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0061.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.46532146135965985 AUROC: 0.7692767151859996 AUPRC: 0.443225600778277 Sensitivity: 0.6823671497584541 Specificity: 0.7039151712887439 Threshold: 0.08
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.4576370019327711 AUROC: 0.7762393937521066 AUPRC: 0.43497944010843514 Sensitivity: 0.7084006462035541 Specificity: 0.7104144527098831 Threshold: 0.08
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0062.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.58it/s]
Loss: 0.4330498319533136 AUROC: 0.7460825958237122 AUPRC: 0.4090325476828159 Sensitivity: 0.6992753623188406 Specificity: 0.6756389342033714 Threshold: 0.15
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.425532941740972 AUROC: 0.7553113145137708 AUPRC: 0.3984960524289548 Sensitivity: 0.7043618739903069 Specificity: 0.6895147006730429 Threshold: 0.15
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0063.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.42922082874510026 AUROC: 0.7557050276746708 AUPRC: 0.4229785288030629 Sensitivity: 0.6932367149758454 Specificity: 0.6827079934747146 Threshold: 0.15
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.41666514550646144 AUROC: 0.7643218861395289 AUPRC: 0.41053068850277336 Sensitivity: 0.7039579967689822 Specificity: 0.6997874601487779 Threshold: 0.15
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0064.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.4575785497824351 AUROC: 0.7640330414817966 AUPRC: 0.4289697638118022 Sensitivity: 0.6944444444444444 Specificity: 0.6818923327895595 Threshold: 0.09
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.4486409113914878 AUROC: 0.7709175817211149 AUPRC: 0.41966642545580063 Sensitivity: 0.7225363489499192 Specificity: 0.685618136733971 Threshold: 0.09
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0065.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 0.42940447562270695 AUROC: 0.7540661538906095 AUPRC: 0.4623886544328974 Sensitivity: 0.6763285024154589 Specificity: 0.690320826536161 Threshold: 0.2
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.4204599413054961 AUROC: 0.7609808143297869 AUPRC: 0.44908793605787123 Sensitivity: 0.697092084006462 Specificity: 0.7072263549415515 Threshold: 0.2
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0066.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.444102222720782 AUROC: 0.7672396978509114 AUPRC: 0.4650369217024168 Sensitivity: 0.711352657004831 Specificity: 0.6840674279499728 Threshold: 0.1
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.59it/s]
Loss: 0.43330782003424784 AUROC: 0.7780510112810932 AUPRC: 0.45289548181435735 Sensitivity: 0.7265751211631664 Specificity: 0.7001416932341481 Threshold: 0.1
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0067.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.44763029863437015 AUROC: 0.7736587569909082 AUPRC: 0.4359697739877242 Sensitivity: 0.6956521739130435 Specificity: 0.7120717781402937 Threshold: 0.11
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.4337507153826731 AUROC: 0.7811226527766094 AUPRC: 0.43205021447156855 Sensitivity: 0.7197092084006462 Specificity: 0.7138682252922423 Threshold: 0.11
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0068.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.4241672133406003 AUROC: 0.7743693406151737 AUPRC: 0.44952237649199833 Sensitivity: 0.714975845410628 Specificity: 0.6965742251223491 Threshold: 0.13
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.4177320727871524 AUROC: 0.779782153805831 AUPRC: 0.43379030434681104 Sensitivity: 0.7237479806138933 Specificity: 0.7058094226000708 Threshold: 0.13
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0069.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.442691864238845 AUROC: 0.7515899472775847 AUPRC: 0.40964451394928253 Sensitivity: 0.6642512077294686 Specificity: 0.7052746057640021 Threshold: 0.12
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.43165492956285123 AUROC: 0.7585670828190086 AUPRC: 0.4027948150353304 Sensitivity: 0.6781098546042004 Specificity: 0.713159759121502 Threshold: 0.12
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0070.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
Loss: 0.42170297271675533 AUROC: 0.7610651070603903 AUPRC: 0.43448681193969824 Sensitivity: 0.6908212560386473 Specificity: 0.6943991299619359 Threshold: 0.2
AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.4115281640379517 AUROC: 0.7669347485774881 AUPRC: 0.42218639560271315 Sensitivity: 0.697092084006462 Specificity: 0.7107686857952533 Threshold: 0.2
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0071.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:24<00:00, 1.46it/s]
Loss: 0.4350316599011421 AUROC: 0.7579920955780944 AUPRC: 0.42163053733856715 Sensitivity: 0.6727053140096618 Specificity: 0.7098966829798804 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.4230889239244991 AUROC: 0.7643070429720785 AUPRC: 0.4043245511510777 Sensitivity: 0.6882067851373183 Specificity: 0.7194473963868225 Threshold: 0.13 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0072.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 0.4357975307438109 AUROC: 0.7667228500576609 AUPRC: 0.43418987019855565 Sensitivity: 0.6823671497584541 Specificity: 0.7090810222947254 Threshold: 0.12 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.59it/s]
Loss: 0.42395122983941325 AUROC: 0.77451862356125 AUPRC: 0.42515654216429416 Sensitivity: 0.7051696284329564 Specificity: 0.7145766914629826 Threshold: 0.12 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0073.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.43739432841539383 AUROC: 0.7519578811736056 AUPRC: 0.4134897890276338 Sensitivity: 0.6871980676328503 Specificity: 0.6740076128330614 Threshold: 0.12 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.59it/s]
Loss: 0.4261324608491527 AUROC: 0.7617758537217649 AUPRC: 0.40490462540957994 Sensitivity: 0.7160743134087237 Specificity: 0.6894261424017003 Threshold: 0.12 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0074.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
Loss: 0.48722512274980545 AUROC: 0.7671863384059284 AUPRC: 0.4352397621635261 Sensitivity: 0.6678743961352657 Specificity: 0.7376291462751495 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:09<00:00, 1.55it/s]
Loss: 0.4785003343390094 AUROC: 0.7699149168181743 AUPRC: 0.41340570507149577 Sensitivity: 0.6643780290791599 Specificity: 0.7409670563230606 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0075.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.60it/s]
Loss: 0.44265689618057674 AUROC: 0.7762814147575479 AUPRC: 0.46387190919051247 Sensitivity: 0.7101449275362319 Specificity: 0.6968461120174008 Threshold: 0.11 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.59it/s]
Loss: 0.4326296025128276 AUROC: 0.7858118239741977 AUPRC: 0.45194866806476874 Sensitivity: 0.7386914378029079 Specificity: 0.6987247608926673 Threshold: 0.11 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0076.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.4257696701420678 AUROC: 0.7639772192931991 AUPRC: 0.45596992188730484 Sensitivity: 0.6884057971014492 Specificity: 0.7058183795541055 Threshold: 0.22 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.59it/s]
Loss: 0.41753237280580735 AUROC: 0.7708847300360472 AUPRC: 0.43599767895527164 Sensitivity: 0.7003231017770598 Specificity: 0.7179419057739993 Threshold: 0.22 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0077.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.4342733100056648 AUROC: 0.7597514796163636 AUPRC: 0.432136900566385 Sensitivity: 0.6714975845410628 Specificity: 0.7052746057640021 Threshold: 0.12 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.42107493996068285 AUROC: 0.7694693356613143 AUPRC: 0.4212870448718144 Sensitivity: 0.7051696284329564 Specificity: 0.7130712008501594 Threshold: 0.12 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0078.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
Loss: 0.46310827632745105 AUROC: 0.7692425651412105 AUPRC: 0.4473500278329372 Sensitivity: 0.6690821256038647 Specificity: 0.7292006525285482 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.59it/s]
Loss: 0.45466324452448775 AUROC: 0.7778770064385726 AUPRC: 0.4352522599452716 Sensitivity: 0.6906300484652665 Specificity: 0.7345023025150549 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0079.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.5283390697505739 AUROC: 0.7663164973612524 AUPRC: 0.4252727974427306 Sensitivity: 0.6328502415458938 Specificity: 0.7631865144100054 Threshold: 0.06 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:09<00:00, 1.55it/s]
Loss: 0.5211483749250571 AUROC: 0.7641289786126767 AUPRC: 0.4013232707935823 Sensitivity: 0.627221324717286 Specificity: 0.7631951824300389 Threshold: 0.06 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0080.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.4546584130989181 AUROC: 0.7084940683999128 AUPRC: 0.384483688851873 Sensitivity: 0.6545893719806763 Specificity: 0.6364872213159326 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.59it/s]
Loss: 0.4462952263377331 AUROC: 0.7169088570861208 AUPRC: 0.37217910680816435 Sensitivity: 0.6720516962843296 Specificity: 0.6539142755933404 Threshold: 0.13 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0081.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.45688312086794114 AUROC: 0.7627936247120232 AUPRC: 0.4224155107568789 Sensitivity: 0.711352657004831 Specificity: 0.6699293094072866 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.44302213923246775 AUROC: 0.7692056637807256 AUPRC: 0.4094041018234691 Sensitivity: 0.6821486268174475 Specificity: 0.7275947573503365 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0082.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.4945620579851998 AUROC: 0.7696349622904699 AUPRC: 0.43566394262826047 Sensitivity: 0.6763285024154589 Specificity: 0.7183251767264818 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.61it/s]
Loss: 0.4825970769204475 AUROC: 0.7727668973187588 AUPRC: 0.42041937527799866 Sensitivity: 0.6934571890145396 Specificity: 0.7261778250088559 Threshold: 0.07 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0083.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.4906047027972009 AUROC: 0.781514909121477 AUPRC: 0.4578375652006121 Sensitivity: 0.716183574879227 Specificity: 0.7058183795541055 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.4781684173202073 AUROC: 0.7851728345571257 AUPRC: 0.4418657768030487 Sensitivity: 0.7217285945072698 Specificity: 0.7127169677647892 Threshold: 0.07 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0084.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.443852986726496 AUROC: 0.7283109453520475 AUPRC: 0.3904477054777896 Sensitivity: 0.6702898550724637 Specificity: 0.6601413811854269 Threshold: 0.13 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.57it/s]
Loss: 0.4371465132744224 AUROC: 0.7382663151804615 AUPRC: 0.38157891700735463 Sensitivity: 0.6865912762520194 Specificity: 0.6702975557917109 Threshold: 0.13 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0085.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.4893338564369414 AUROC: 0.7670756791261792 AUPRC: 0.4381434988455223 Sensitivity: 0.7065217391304348 Specificity: 0.6799891245241979 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.4733742027095071 AUROC: 0.7735442322098022 AUPRC: 0.424542334009593 Sensitivity: 0.7269789983844911 Specificity: 0.6905773999291533 Threshold: 0.07
Plot AUROC/AUPRC for Each Intermediate Model
AUROC/AUPRC Plots - Best Model Based on Validation Loss
Epoch with best Validation Loss: 70, 0.4192
Best Model Based on Validation Loss: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0070.model
Generate Stats Based on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.4115281640379517 AUROC: 0.7669347485774881 AUPRC: 0.42218639560271315 Sensitivity: 0.697092084006462 Specificity: 0.7107686857952533 Threshold: 0.2
best_model_val_test_auroc: 0.7669347485774881
best_model_val_test_auprc: 0.42218639560271315
AUROC/AUPRC Plots - Best Model Based on Model AUROC
Epoch with best model Test AUROC: 56, 0.7875
Best Model Based on Model AUROC: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.1_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0056.model
Generate Stats Based on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.63it/s]
Loss: 0.4307397759898945 AUROC: 0.787539783265434 AUPRC: 0.4410860265714787 Sensitivity: 0.7088045234248789 Specificity: 0.730428622033298 Threshold: 0.13
best_model_auroc_test_auroc: 0.787539783265434
best_model_auroc_test_auprc: 0.4410860265714787
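Each evaluation line above reports Loss, AUROC, AUPRC, and sensitivity/specificity at a reported threshold. As a rough cross-check, the ranking metrics can be computed from raw prediction scores with a minimal NumPy sketch; the notebook's actual metric code is not shown here, so the function names, the threshold handling, and the implementations below are illustrative assumptions, not the project's code:

```python
import numpy as np

def auroc(y_true, y_score):
    # Mann-Whitney U formulation: probability that a randomly chosen
    # positive is scored above a randomly chosen negative (ties = 0.5).
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def average_precision(y_true, y_score):
    # AUPRC as average precision: mean of precision@k at each positive hit.
    order = np.argsort(-y_score)
    hits = y_true[order]
    cum_tp = np.cumsum(hits)
    precision_at_k = cum_tp / (np.arange(len(hits)) + 1)
    return precision_at_k[hits == 1].mean()

def sens_spec(y_true, y_score, threshold):
    # Sensitivity (recall on positives) and specificity (recall on
    # negatives) after binarizing scores at a fixed threshold.
    pred = (y_score >= threshold).astype(int)
    tp = ((pred == 1) & (y_true == 1)).sum()
    tn = ((pred == 0) & (y_true == 0)).sum()
    return tp / (y_true == 1).sum(), tn / (y_true == 0).sum()
```

In the actual runs a library implementation (e.g. scikit-learn's `roc_auc_score` / `average_precision_score`) would typically be used; the sketch only makes the definitions behind the logged numbers concrete.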
run_experiment(
    experimentNamePrefix=None,
    useAbp=True,
    useEeg=False,
    useEcg=False,
    nResiduals=12,
    skip_connection=False,
    batch_size=128,
    learning_rate=1e-4,
    weight_decay=1e-3,
    balance_labels=False,
    # pos_weight=2.0,
    pos_weight=None,
    max_epochs=100,
    patience=15,
    device=device,
)
Experiment Setup
name: ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES
prediction_window: 003
max_cases: _ALL
use_abp: True
use_eeg: False
use_ecg: False
n_residuals: 12
skip_connection: False
batch_size: 128
learning_rate: 0.0001
weight_decay: 0.001
balance_labels: False
max_epochs: 100
patience: 15
device: mps
Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (abpFc): Linear(in_features=2814, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=32, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
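For reference, the `ResidualBlock` layout printed above can be realized with a short PyTorch sketch. This is a reconstruction from the module repr only: the forward ordering (pre-activation, with dropout before each convolution) and the use of `residualConv` on the skip path are assumptions, not the project's confirmed implementation.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sketch of the printed block: BN -> ReLU -> Dropout -> Conv, twice,
    plus a convolutional skip connection and an optional 2x max-pool.
    The forward ordering is inferred, not taken from the project code."""

    def __init__(self, in_ch, out_ch, kernel_size=15, downsample=False):
        super().__init__()
        pad = kernel_size // 2  # 'same'-length padding (7 for k=15, 3 for k=7)
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.5)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(2) if downsample else None

    def forward(self, x):
        out = self.conv1(self.dropout(self.relu(self.bn1(x))))
        out = self.conv2(self.dropout(self.relu(self.bn2(out))))
        out = out + self.residualConv(x)  # conv skip path matches channel change
        if self.downsample is not None:
            out = self.downsample(out)  # halves the temporal length
        return out
```

With `kernel_size=15`, `padding=7` the sequence length is preserved by the convolutions, so only the `MaxPool1d` stages shrink the signal; for example, a `(batch, 1, 100)` input through a downsampling block with `out_ch=2` yields `(batch, 2, 50)`.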
Training Loop
100%|██████████| 212/212 [02:36<00:00, 1.36it/s] 100%|██████████| 36/36 [00:21<00:00, 1.65it/s]
[2024-05-01 13:17:35.312474] Completed epoch 0 with training loss 0.39792994, validation loss 0.46845832 Validation loss improved to 0.46845832. Model saved.
100%|██████████| 212/212 [02:35<00:00, 1.37it/s] 100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
[2024-05-01 13:20:32.522869] Completed epoch 1 with training loss 0.38316965, validation loss 0.47801733 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:35<00:00, 1.37it/s] 100%|██████████| 36/36 [00:23<00:00, 1.53it/s]
[2024-05-01 13:23:31.124564] Completed epoch 2 with training loss 0.38149470, validation loss 0.44603640 Validation loss improved to 0.44603640. Model saved.
100%|██████████| 212/212 [02:38<00:00, 1.34it/s] 100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
[2024-05-01 13:26:31.476116] Completed epoch 3 with training loss 0.38017458, validation loss 0.50024891 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:33<00:00, 1.38it/s] 100%|██████████| 36/36 [00:21<00:00, 1.71it/s]
[2024-05-01 13:29:26.275623] Completed epoch 4 with training loss 0.37987363, validation loss 0.46677998 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:29<00:00, 1.41it/s] 100%|██████████| 36/36 [00:21<00:00, 1.71it/s]
[2024-05-01 13:32:17.310474] Completed epoch 5 with training loss 0.37685850, validation loss 0.45748127 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:30<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 13:35:09.381726] Completed epoch 6 with training loss 0.37537360, validation loss 0.46958792 No improvement in validation loss. 4 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 13:38:01.921949] Completed epoch 7 with training loss 0.37458894, validation loss 0.49083984 No improvement in validation loss. 5 epochs without improvement.
100%|██████████| 212/212 [02:33<00:00, 1.38it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 13:40:56.322315] Completed epoch 8 with training loss 0.37479639, validation loss 0.48468149 No improvement in validation loss. 6 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 13:43:48.604749] Completed epoch 9 with training loss 0.37357727, validation loss 0.46809483 No improvement in validation loss. 7 epochs without improvement.
100%|██████████| 212/212 [02:30<00:00, 1.41it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 13:46:40.459271] Completed epoch 10 with training loss 0.37142140, validation loss 0.50691807 No improvement in validation loss. 8 epochs without improvement.
100%|██████████| 212/212 [02:29<00:00, 1.41it/s] 100%|██████████| 36/36 [00:20<00:00, 1.72it/s]
[2024-05-01 13:49:31.420080] Completed epoch 11 with training loss 0.37196594, validation loss 0.47094113 No improvement in validation loss. 9 epochs without improvement.
100%|██████████| 212/212 [02:30<00:00, 1.41it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 13:52:22.850482] Completed epoch 12 with training loss 0.37232172, validation loss 0.43037853 Validation loss improved to 0.43037853. Model saved.
100%|██████████| 212/212 [02:37<00:00, 1.35it/s] 100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
[2024-05-01 13:55:22.343102] Completed epoch 13 with training loss 0.37154371, validation loss 0.46282107 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:35<00:00, 1.36it/s] 100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
[2024-05-01 13:58:20.354389] Completed epoch 14 with training loss 0.37079147, validation loss 0.47211546 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:39<00:00, 1.33it/s] 100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
[2024-05-01 14:01:22.939732] Completed epoch 15 with training loss 0.36996749, validation loss 0.51182657 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:38<00:00, 1.34it/s] 100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
[2024-05-01 14:04:23.958708] Completed epoch 16 with training loss 0.36900502, validation loss 0.53405821 No improvement in validation loss. 4 epochs without improvement.
100%|██████████| 212/212 [02:38<00:00, 1.34it/s] 100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
[2024-05-01 14:07:24.957530] Completed epoch 17 with training loss 0.36902094, validation loss 0.49442551 No improvement in validation loss. 5 epochs without improvement.
100%|██████████| 212/212 [02:39<00:00, 1.33it/s] 100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
[2024-05-01 14:10:27.126319] Completed epoch 18 with training loss 0.36902848, validation loss 0.45933783 No improvement in validation loss. 6 epochs without improvement.
100%|██████████| 212/212 [02:40<00:00, 1.32it/s] 100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
[2024-05-01 14:13:29.498843] Completed epoch 19 with training loss 0.36787274, validation loss 0.52738476 No improvement in validation loss. 7 epochs without improvement.
100%|██████████| 212/212 [02:39<00:00, 1.33it/s] 100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
[2024-05-01 14:16:30.945910] Completed epoch 20 with training loss 0.36832651, validation loss 0.51123810 No improvement in validation loss. 8 epochs without improvement.
100%|██████████| 212/212 [02:36<00:00, 1.35it/s] 100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
[2024-05-01 14:19:29.596996] Completed epoch 21 with training loss 0.36643237, validation loss 0.47083390 No improvement in validation loss. 9 epochs without improvement.
100%|██████████| 212/212 [02:34<00:00, 1.37it/s] 100%|██████████| 36/36 [00:23<00:00, 1.52it/s]
[2024-05-01 14:22:27.897823] Completed epoch 22 with training loss 0.36703420, validation loss 0.46505818 No improvement in validation loss. 10 epochs without improvement.
100%|██████████| 212/212 [02:33<00:00, 1.38it/s] 100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
[2024-05-01 14:25:24.078539] Completed epoch 23 with training loss 0.36597341, validation loss 0.53274894 No improvement in validation loss. 11 epochs without improvement.
100%|██████████| 212/212 [02:38<00:00, 1.34it/s] 100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
[2024-05-01 14:28:25.227128] Completed epoch 24 with training loss 0.36530784, validation loss 0.49367607 No improvement in validation loss. 12 epochs without improvement.
100%|██████████| 212/212 [02:40<00:00, 1.32it/s] 100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
[2024-05-01 14:31:27.601876] Completed epoch 25 with training loss 0.36460119, validation loss 0.45166624 No improvement in validation loss. 13 epochs without improvement.
100%|██████████| 212/212 [02:37<00:00, 1.34it/s] 100%|██████████| 36/36 [00:22<00:00, 1.60it/s]
[2024-05-01 14:34:27.846536] Completed epoch 26 with training loss 0.36417621, validation loss 0.53098446 No improvement in validation loss. 14 epochs without improvement.
100%|██████████| 212/212 [02:41<00:00, 1.31it/s] 100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
[2024-05-01 14:37:31.626243] Completed epoch 27 with training loss 0.36375138, validation loss 0.48155302 No improvement in validation loss. 15 epochs without improvement. Early stopping due to no improvement in validation loss.
Plot Validation and Loss Values from Training
Generate AUROC/AUPRC for Each Intermediate Model
Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0000.model AUROC/AUPRC on Validation Data
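The training loop above saves a checkpoint whenever validation loss improves and stops after `patience` epochs without improvement (15 here, which triggers at epoch 27 after the last improvement at epoch 12). That bookkeeping can be sketched as follows, with the per-epoch training and checkpointing elided; the function name is illustrative:

```python
def early_stopping_trainer(val_losses, patience):
    """Given a sequence of per-epoch validation losses, return
    (best_epoch, stop_epoch) using the patience rule from the log above."""
    best_loss = float("inf")
    best_epoch = -1
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            # Improvement: record it and reset the patience counter
            # (this is where the model checkpoint would be saved).
            best_loss = loss
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return best_epoch, epoch  # early stop
    return best_epoch, len(val_losses) - 1  # ran to max_epochs
```

For example, losses that improve at epochs 0 and 2 and then stall stop 15 epochs later, at epoch 17, with epoch 2 retained as the best checkpoint.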
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 0.4680738126238187 AUROC: 0.7734587822094028 AUPRC: 0.4769442455597843 Sensitivity: 0.6980676328502415 Specificity: 0.723762914627515 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.45564721814460224 AUROC: 0.7842803488766691 AUPRC: 0.4903430882821409 Sensitivity: 0.7132471728594507 Specificity: 0.7405242649663478 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0001.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.4729060373372502 AUROC: 0.7756328922723702 AUPRC: 0.4776012913285448 Sensitivity: 0.7065217391304348 Specificity: 0.7069059271343121 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.4611875744605506 AUROC: 0.786273142465222 AUPRC: 0.49112852410340757 Sensitivity: 0.722940226171244 Specificity: 0.726532058094226 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0002.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.4509792584511969 AUROC: 0.7764721952962254 AUPRC: 0.4758754445823103 Sensitivity: 0.6944444444444444 Specificity: 0.7270255573681349 Threshold: 0.11 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.43461543487177956 AUROC: 0.7866148393332635 AUPRC: 0.4888663864152283 Sensitivity: 0.7039579967689822 Specificity: 0.747874601487779 Threshold: 0.11 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0003.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.5022265877988603 AUROC: 0.7779633044634109 AUPRC: 0.47845994118312873 Sensitivity: 0.7198067632850241 Specificity: 0.7071778140293637 Threshold: 0.06 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.49055093678611295 AUROC: 0.7876237812865357 AUPRC: 0.48817575512406225 Sensitivity: 0.7233441033925686 Specificity: 0.7272405242649663 Threshold: 0.06 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0004.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.46829646246300805 AUROC: 0.7776878055443911 AUPRC: 0.4763940749666522 Sensitivity: 0.7270531400966184 Specificity: 0.6968461120174008 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.61it/s]
Loss: 0.45267595033402797 AUROC: 0.7876340999704139 AUPRC: 0.4864389649201686 Sensitivity: 0.7322294022617124 Specificity: 0.7151080410910379 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0005.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.4492973718378279 AUROC: 0.7782210716284055 AUPRC: 0.47667259289922587 Sensitivity: 0.7185990338164251 Specificity: 0.7028276237085372 Threshold: 0.09 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.4392256477364787 AUROC: 0.7881536823645146 AUPRC: 0.48644256534569785 Sensitivity: 0.7281906300484653 Specificity: 0.7205986539142756 Threshold: 0.09 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0006.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 0.47131070329083335 AUROC: 0.7775594145106167 AUPRC: 0.47559226852154735 Sensitivity: 0.7246376811594203 Specificity: 0.6968461120174008 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.4599135860248848 AUROC: 0.7872114988981005 AUPRC: 0.4821454585235289 Sensitivity: 0.72859450726979 Specificity: 0.7144881331916401 Threshold: 0.07 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0007.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
Loss: 0.4929405459099346 AUROC: 0.7768798614558952 AUPRC: 0.4753731757399012 Sensitivity: 0.7125603864734299 Specificity: 0.7066340402392605 Threshold: 0.06 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.47993244065178764 AUROC: 0.7862417929802333 AUPRC: 0.47783170278331233 Sensitivity: 0.7180936995153473 Specificity: 0.7235210768685795 Threshold: 0.06 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0008.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.60it/s]
Loss: 0.47892830934789443 AUROC: 0.7766409753252792 AUPRC: 0.4750970802482479 Sensitivity: 0.6871980676328503 Specificity: 0.7319195214790647 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.4689822384604701 AUROC: 0.7863507382526522 AUPRC: 0.47907249051692813 Sensitivity: 0.7362681744749596 Specificity: 0.7024442082890542 Threshold: 0.06 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0009.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.4769204250640339 AUROC: 0.777024506597526 AUPRC: 0.4720229384499063 Sensitivity: 0.7198067632850241 Specificity: 0.7036432843936922 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.57it/s]
Loss: 0.4606622778983028 AUROC: 0.7860774451382224 AUPRC: 0.47265328157970754 Sensitivity: 0.7160743134087237 Specificity: 0.7188274884874247 Threshold: 0.07 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0010.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
Loss: 0.5069897323846817 AUROC: 0.7747712603730761 AUPRC: 0.46974480272086067 Sensitivity: 0.7306763285024155 Specificity: 0.6840674279499728 Threshold: 0.05 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.4893456131771759 AUROC: 0.7839325537916388 AUPRC: 0.46741354467681623 Sensitivity: 0.7294022617124394 Specificity: 0.6981934112646121 Threshold: 0.05 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0011.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
Loss: 0.4759562876489427 AUROC: 0.7737247585197795 AUPRC: 0.468644179906267 Sensitivity: 0.7053140096618358 Specificity: 0.7139749864056553 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.4626932467023532 AUROC: 0.7827499110125287 AUPRC: 0.46859388538782865 Sensitivity: 0.7063812600969306 Specificity: 0.7293659227771874 Threshold: 0.07 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0012.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.43675465716256034 AUROC: 0.772002479818637 AUPRC: 0.4646037493000921 Sensitivity: 0.7137681159420289 Specificity: 0.7058183795541055 Threshold: 0.1 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.4267510546854249 AUROC: 0.780787215075565 AUPRC: 0.46331212590862 Sensitivity: 0.710016155088853 Specificity: 0.7197130712008502 Threshold: 0.1 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0013.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.60it/s]
Loss: 0.4663875599702199 AUROC: 0.7716005600607345 AUPRC: 0.4639509434070448 Sensitivity: 0.7222222222222222 Specificity: 0.6957585644371941 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 0.4541075168936341 AUROC: 0.7802145370619943 AUPRC: 0.4607171805413223 Sensitivity: 0.7193053311793215 Specificity: 0.7085547290116897 Threshold: 0.07 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0014.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.65it/s]
Loss: 0.47784052126937443 AUROC: 0.7728138717481934 AUPRC: 0.46374381208557564 Sensitivity: 0.7077294685990339 Specificity: 0.7104404567699837 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.64it/s]
Loss: 0.46013250546874823 AUROC: 0.7808057243265423 AUPRC: 0.4609778220473695 Sensitivity: 0.7079967689822294 Specificity: 0.7221927027984414 Threshold: 0.07 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0015.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
Loss: 0.5081053136123551 AUROC: 0.7717085924139616 AUPRC: 0.46157094785782393 Sensitivity: 0.7089371980676329 Specificity: 0.7033713974986405 Threshold: 0.05 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.495552602465506 AUROC: 0.7789514729286378 AUPRC: 0.4551475706235648 Sensitivity: 0.7116316639741519 Specificity: 0.7151965993623804 Threshold: 0.05 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0016.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.5320899229910638 AUROC: 0.7732391054789806 AUPRC: 0.46255498752823887 Sensitivity: 0.7137681159420289 Specificity: 0.7041870581837956 Threshold: 0.04 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.5191570724602099 AUROC: 0.7809563735344965 AUPRC: 0.45719834099770895 Sensitivity: 0.7136510500807755 Specificity: 0.7157279489904357 Threshold: 0.04 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0017.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.48787300205892986 AUROC: 0.7701147047465935 AUPRC: 0.45453480417841724 Sensitivity: 0.7016908212560387 Specificity: 0.7047308319738989 Threshold: 0.06 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.47624527676789846 AUROC: 0.7772093500366537 AUPRC: 0.449189203102138 Sensitivity: 0.7092084006462036 Specificity: 0.7155508324477506 Threshold: 0.06 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0018.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.4542144474883874 AUROC: 0.7670996498306946 AUPRC: 0.44760194517865537 Sensitivity: 0.7077294685990339 Specificity: 0.7017400761283307 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.4490602869126532 AUROC: 0.7736512281987848 AUPRC: 0.4445461747511469 Sensitivity: 0.7035541195476576 Specificity: 0.71413390010627 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0019.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.5428372969230016 AUROC: 0.7699168643428873 AUPRC: 0.44900375593261904 Sensitivity: 0.7318840579710145 Specificity: 0.6824361065796629 Threshold: 0.04 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.5187540648987999 AUROC: 0.775913613051572 AUPRC: 0.44423380912691857 Sensitivity: 0.72859450726979 Specificity: 0.6948281969535954 Threshold: 0.04 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0020.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.60it/s]
Loss: 0.5021669363809956 AUROC: 0.7695075563541413 AUPRC: 0.450599326693871 Sensitivity: 0.711352657004831 Specificity: 0.700652528548124 Threshold: 0.05 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.49738058378850974 AUROC: 0.7755669088499327 AUPRC: 0.4452622262048117 Sensitivity: 0.7096122778675282 Specificity: 0.7109458023379384 Threshold: 0.05 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0021.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.4766596754391988 AUROC: 0.766975527552519 AUPRC: 0.44920405259565505 Sensitivity: 0.6847826086956522 Specificity: 0.723762914627515 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.57it/s]
Loss: 0.46501986075330665 AUROC: 0.7731239738542791 AUPRC: 0.4435761607220195 Sensitivity: 0.7277867528271406 Specificity: 0.683669854764435 Threshold: 0.06 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0022.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.4615176088280148 AUROC: 0.7679721178018929 AUPRC: 0.44790570448342026 Sensitivity: 0.7198067632850241 Specificity: 0.688689505165851 Threshold: 0.07 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.45285136597575965 AUROC: 0.77395991243175 AUPRC: 0.44351263974328387 Sensitivity: 0.7164781906300485 Specificity: 0.6996103436060928 Threshold: 0.07 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0023.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 0.5341989522178968 AUROC: 0.7699971497847233 AUPRC: 0.4496747892391529 Sensitivity: 0.7016908212560387 Specificity: 0.7112561174551386 Threshold: 0.04 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.5218059123942146 AUROC: 0.7763461751410781 AUPRC: 0.44352916067406 Sensitivity: 0.7019386106623586 Specificity: 0.7206872121856182 Threshold: 0.04 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0024.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.48626013182931477 AUROC: 0.7674342545964647 AUPRC: 0.44317681110346396 Sensitivity: 0.6992753623188406 Specificity: 0.7090810222947254 Threshold: 0.06 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.4798603219290574 AUROC: 0.7735364350760571 AUPRC: 0.4402732366369154 Sensitivity: 0.6983037156704361 Specificity: 0.7208643287283032 Threshold: 0.06 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0025.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.45348230169879067 AUROC: 0.7647964920023221 AUPRC: 0.43677044584309366 Sensitivity: 0.7089371980676329 Specificity: 0.689505165851006 Threshold: 0.08 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.60it/s]
Loss: 0.4443732229647813 AUROC: 0.7698483729313275 AUPRC: 0.43584752517802783 Sensitivity: 0.7132471728594507 Specificity: 0.7003188097768331 Threshold: 0.08 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0026.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
Loss: 0.5272455968790584 AUROC: 0.7673964925277076 AUPRC: 0.4412836301714102 Sensitivity: 0.7210144927536232 Specificity: 0.6835236541598695 Threshold: 0.04 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.5186444377457654 AUROC: 0.7717498899817274 AUPRC: 0.4353670103659464 Sensitivity: 0.7217285945072698 Specificity: 0.6930570315267446 Threshold: 0.04 Intermediate Model: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0027.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
Loss: 0.4854743778705597 AUROC: 0.7656761840214568 AUPRC: 0.4378249877245987 Sensitivity: 0.7258454106280193 Specificity: 0.6837955410549211 Threshold: 0.06 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.59it/s]
Loss: 0.4722739187655626 AUROC: 0.7702906277880118 AUPRC: 0.43331415116990096 Sensitivity: 0.7180936995153473 Specificity: 0.6942968473255402 Threshold: 0.06
Plot AUROC/AUPRC for Each Intermediate Model
AUROC/AUPRC Plots - Best Model Based on Validation Loss
Epoch with best Validation Loss: 12, 0.4304
Best Model Based on Validation Loss: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0012.model
Generate Stats Based on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.64it/s]
Loss: 0.4267510546854249 AUROC: 0.780787215075565 AUPRC: 0.46331212590862 Sensitivity: 0.710016155088853 Specificity: 0.7197130712008502 Threshold: 0.1
best_model_val_test_auroc: 0.780787215075565
best_model_val_test_auprc: 0.46331212590862
AUROC/AUPRC Plots - Best Model Based on Model AUROC
Epoch with best model Test AUROC: 5, 0.7882
Best Model Based on Model AUROC: ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_0.001_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_0005.model
Generate Stats Based on Test Data
100%|██████████| 108/108 [01:05<00:00, 1.64it/s]
Loss: 0.4392256477364787 AUROC: 0.7881536823645146 AUPRC: 0.48644256534569785 Sensitivity: 0.7281906300484653 Specificity: 0.7205986539142756 Threshold: 0.09
best_model_auroc_test_auroc: 0.7881536823645146
best_model_auroc_test_auprc: 0.48644256534569785
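The stats above combine ranking metrics (AUROC, AUPRC) with a sensitivity/specificity pair at a reported threshold. As an illustrative sketch of how such numbers arise (these helpers are hypothetical stand-ins, not the project's evaluation code, and the notebook's actual threshold-selection rule is not shown; Youden's J is one common choice):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outscores a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_threshold(labels, scores):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = 0.0, float("-inf")
    for t in sorted(set(scores)):
        tp = sum(y == 1 and s >= t for y, s in zip(labels, scores))
        fp = sum(y == 0 and s >= t for y, s in zip(labels, scores))
        fn = sum(y == 1 and s < t for y, s in zip(labels, scores))
        tn = sum(y == 0 and s < t for y, s in zip(labels, scores))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t
```

Note that an AUROC pinned at exactly 0.5 with sensitivity 0.0 and specificity 1.0, as in the 6-residual-block run below, is the signature of a degenerate model whose scores never separate the classes at the chosen threshold.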
run_experiment(
experimentNamePrefix=None,
useAbp=True,
useEeg=False,
useEcg=False,
nResiduals=6,
skip_connection=False,
batch_size=128,
learning_rate=1e-4,
weight_decay=0.0,
balance_labels=False,
# pos_weight=2.0,
pos_weight=None,
max_epochs=100,
patience=15,
device=device
)
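The `patience=15` argument governs early stopping; the training loop's "No improvement in validation loss. N epochs without improvement." messages reflect this bookkeeping. A minimal, self-contained sketch of the mechanism (the project's actual implementation may differ, e.g. in how ties are treated):

```python
class EarlyStopper:
    """Stop training after `patience` consecutive epochs without a new
    best validation loss. In the log, an improvement also triggers a
    model checkpoint ("Model saved.")."""

    def __init__(self, patience=15):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0  # improvement: caller would save the model here
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```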
Experiment Setup
name: ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES
prediction_window: 003
max_cases: _ALL
use_abp: True
use_eeg: False
use_ecg: False
n_residuals: 6
skip_connection: False
batch_size: 128
learning_rate: 0.0001
weight_decay: 0.0
balance_labels: False
max_epochs: 100
patience: 15
device: mps
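The `name:` field above appears to be assembled mechanically from the hyperparameters (note the double underscore produced by `max_cases: _ALL`, and that a `weight_decay` of 0.0 is dropped from the name while 0.001 appears in the 12-block run's model paths). A hypothetical reconstruction of that scheme; the repo's real helper may differ:

```python
def experiment_name(n_residuals, batch_size, learning_rate, weight_decay,
                    prediction_window, max_cases, use_abp=True):
    # Hypothetical reconstruction of the run-naming scheme seen in the
    # "Experiment Setup" block; not the project's actual helper.
    parts = ["ABP"] if use_abp else []
    parts += [f"{n_residuals}_RESIDUAL_BLOCKS",
              f"{batch_size}_BATCH_SIZE",
              f"{learning_rate}_LEARNING_RATE"]
    if weight_decay:  # omitted from the name when 0.0
        parts.append(f"{weight_decay}_WEIGHT_DECAY")
    parts += [f"{prediction_window}_MINS", f"{max_cases}_MAX_CASES"]
    return "_".join(parts)
```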
Model Architecture
HypotensionCNN(
(abpResiduals): Sequential(
(0): ResidualBlock(
(bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(1): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(2): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(3): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(4): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(5): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
)
(abpFc): Linear(in_features=15000, out_features=32, bias=True)
(fullLinear1): Linear(in_features=32, out_features=16, bias=True)
(fullLinear2): Linear(in_features=16, out_features=1, bias=True)
(sigmoid): Sigmoid()
)
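The printed architecture can be read back into a module definition. Below is a sketch of one residual block, assuming a pre-activation ordering (bn → relu → dropout → conv) and a residual add before pooling; the printout alone does not confirm either choice, so treat this as an illustration rather than the project's code:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sketch reconstructed from the printed submodules; forward() ordering
    and the placement of the skip-path add are assumptions."""

    def __init__(self, in_ch, out_ch, kernel_size=15, downsample=False):
        super().__init__()
        pad = kernel_size // 2  # 'same' padding for odd kernels
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.5)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(kernel_size=2) if downsample else None

    def forward(self, x):
        out = self.conv1(self.dropout(self.relu(self.bn1(x))))
        out = self.conv2(self.dropout(self.relu(self.bn2(out))))
        out = out + self.residualConv(x)  # skip path uses its own conv
        if self.downsample is not None:
            out = self.downsample(out)  # halves the temporal length
        return out
```

With `kernel_size=15` and `padding=7`, each conv preserves length and each `MaxPool1d(2)` halves it, which is consistent with `abpFc`'s `in_features=15000` being the flattened output of the final 4-channel block (4 × 3750).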
Training Loop
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
[2024-05-01 15:25:19.262676] Completed epoch 0 with training loss 17.72831917, validation loss 18.47956657 Validation loss improved to 18.47956657. Model saved.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
[2024-05-01 15:28:12.523617] Completed epoch 1 with training loss 17.89421082, validation loss 18.64983940 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.65it/s]
[2024-05-01 15:31:06.531684] Completed epoch 2 with training loss 17.91244125, validation loss 18.47956657 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:22<00:00, 1.61it/s]
[2024-05-01 15:34:00.939556] Completed epoch 3 with training loss 17.88379669, validation loss 18.30929565 Validation loss improved to 18.30929565. Model saved.
100%|██████████| 212/212 [02:33<00:00, 1.38it/s] 100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
[2024-05-01 15:36:56.722087] Completed epoch 4 with training loss 17.90462875, validation loss 18.64983940 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:34<00:00, 1.37it/s] 100%|██████████| 36/36 [00:21<00:00, 1.65it/s]
[2024-05-01 15:39:53.328458] Completed epoch 5 with training loss 17.90202332, validation loss 18.39443207 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:33<00:00, 1.38it/s] 100%|██████████| 36/36 [00:22<00:00, 1.62it/s]
[2024-05-01 15:42:48.762976] Completed epoch 6 with training loss 17.90723419, validation loss 18.22415924 Validation loss improved to 18.22415924. Model saved.
100%|██████████| 212/212 [02:33<00:00, 1.39it/s] 100%|██████████| 36/36 [00:24<00:00, 1.50it/s]
[2024-05-01 15:45:45.907562] Completed epoch 7 with training loss 17.90462875, validation loss 18.22415924 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:42<00:00, 1.31it/s] 100%|██████████| 36/36 [00:23<00:00, 1.54it/s]
[2024-05-01 15:48:51.449588] Completed epoch 8 with training loss 17.89942169, validation loss 18.64983940 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:42<00:00, 1.30it/s] 100%|██████████| 36/36 [00:23<00:00, 1.51it/s]
[2024-05-01 15:51:57.801248] Completed epoch 9 with training loss 17.90202332, validation loss 18.47956657 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:44<00:00, 1.29it/s] 100%|██████████| 36/36 [00:24<00:00, 1.47it/s]
[2024-05-01 15:55:06.826936] Completed epoch 10 with training loss 17.90983582, validation loss 18.13902283 Validation loss improved to 18.13902283. Model saved.
100%|██████████| 212/212 [02:42<00:00, 1.31it/s] 100%|██████████| 36/36 [00:23<00:00, 1.50it/s]
[2024-05-01 15:58:13.118920] Completed epoch 11 with training loss 17.89681625, validation loss 18.22415924 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:40<00:00, 1.32it/s] 100%|██████████| 36/36 [00:23<00:00, 1.55it/s]
[2024-05-01 16:01:17.194306] Completed epoch 12 with training loss 17.90723419, validation loss 18.47956657 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:40<00:00, 1.32it/s] 100%|██████████| 36/36 [00:23<00:00, 1.55it/s]
[2024-05-01 16:04:21.012860] Completed epoch 13 with training loss 17.89681625, validation loss 18.30929565 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:35<00:00, 1.36it/s] 100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
[2024-05-01 16:07:18.057040] Completed epoch 14 with training loss 17.90462875, validation loss 18.73497581 No improvement in validation loss. 4 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
[2024-05-01 16:10:11.770995] Completed epoch 15 with training loss 17.90723419, validation loss 18.39443207 No improvement in validation loss. 5 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
[2024-05-01 16:13:05.131389] Completed epoch 16 with training loss 17.88900375, validation loss 18.30929565 No improvement in validation loss. 6 epochs without improvement.
100%|██████████| 212/212 [02:33<00:00, 1.38it/s] 100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
[2024-05-01 16:16:00.441892] Completed epoch 17 with training loss 17.92546082, validation loss 18.47956657 No improvement in validation loss. 7 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
[2024-05-01 16:18:53.615314] Completed epoch 18 with training loss 17.90202332, validation loss 18.30929565 No improvement in validation loss. 8 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
[2024-05-01 16:21:46.758745] Completed epoch 19 with training loss 17.90202332, validation loss 18.30929565 No improvement in validation loss. 9 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.66it/s]
[2024-05-01 16:24:39.926367] Completed epoch 20 with training loss 17.89160919, validation loss 18.39443207 No improvement in validation loss. 10 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
[2024-05-01 16:27:32.839756] Completed epoch 21 with training loss 17.88639832, validation loss 18.30929565 No improvement in validation loss. 11 epochs without improvement.
100%|██████████| 212/212 [02:32<00:00, 1.39it/s] 100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
[2024-05-01 16:30:26.458952] Completed epoch 22 with training loss 17.88900375, validation loss 18.47956657 No improvement in validation loss. 12 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
[2024-05-01 16:33:19.738584] Completed epoch 23 with training loss 17.90202332, validation loss 18.39443207 No improvement in validation loss. 13 epochs without improvement.
100%|██████████| 212/212 [02:31<00:00, 1.40it/s] 100%|██████████| 36/36 [00:23<00:00, 1.53it/s]
[2024-05-01 16:36:14.633489] Completed epoch 24 with training loss 17.88900375, validation loss 18.64983940 No improvement in validation loss. 14 epochs without improvement.
100%|██████████| 212/212 [02:37<00:00, 1.35it/s] 100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
[2024-05-01 16:39:14.459526] Completed epoch 25 with training loss 17.90983582, validation loss 18.73497581 No improvement in validation loss. 15 epochs without improvement. Early stopping due to no improvement in validation loss.
Plot Validation and Loss Values from Training
Generate AUROC/AUPRC for Each Intermediate Model
Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0000.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.64it/s]
Loss: 18.30929488605923 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0001.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
Loss: 18.224158657921684 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.61it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0002.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.63it/s]
Loss: 18.56470357047187 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.59it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0003.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.58it/s]
Loss: 18.394431114196777 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.57it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0004.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
Loss: 18.56470357047187 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.57it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0005.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.58it/s]
Loss: 18.56470357047187 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:10<00:00, 1.53it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0006.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
Loss: 18.05388622151481 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:10<00:00, 1.54it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0007.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:23<00:00, 1.52it/s]
Loss: 18.56470357047187 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:11<00:00, 1.51it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0008.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:23<00:00, 1.55it/s]
Loss: 18.224158657921684 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:09<00:00, 1.55it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0009.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.60it/s]
Loss: 18.64983977211846 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.57it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0010.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.60it/s]
Loss: 18.224158657921684 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:09<00:00, 1.55it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0011.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.60it/s]
Loss: 18.56470357047187 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.58it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0012.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.59it/s]
Loss: 18.224158657921684 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:11<00:00, 1.52it/s]
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0013.model AUROC/AUPRC on Validation Data
Loss: 18.47956731584337 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0014.model
AUROC/AUPRC on Validation Data
Loss: 18.47956731584337 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0015.model
AUROC/AUPRC on Validation Data
Loss: 18.47956731584337 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0016.model
AUROC/AUPRC on Validation Data
Loss: 18.30929488605923 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0017.model
AUROC/AUPRC on Validation Data
Loss: 18.47956731584337 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0018.model
AUROC/AUPRC on Validation Data
Loss: 18.56470357047187 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0019.model
AUROC/AUPRC on Validation Data
Loss: 18.30929488605923 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0020.model
AUROC/AUPRC on Validation Data
Loss: 18.47956731584337 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0021.model
AUROC/AUPRC on Validation Data
Loss: 18.394431114196777 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0022.model
AUROC/AUPRC on Validation Data
Loss: 18.47956731584337 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0023.model
AUROC/AUPRC on Validation Data
Loss: 18.56470357047187 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0024.model
AUROC/AUPRC on Validation Data
Loss: 18.47956731584337 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0025.model
AUROC/AUPRC on Validation Data
Loss: 18.30929488605923 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0

Plot AUROC/AUPRC for Each Intermediate Model
AUROC/AUPRC Plots - Best Model Based on Validation Loss
Epoch with best Validation Loss: 10, 18.14
Best Model Based on Validation Loss: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0010.model
Generate Stats Based on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
best_model_val_test_auroc: 0.5
best_model_val_test_auprc: 0.5899186519465427

AUROC/AUPRC Plots - Best Model Based on Model AUROC
Epoch with best model Test AUROC: 0, 0.5
Best Model Based on Model AUROC: ./vitaldb_cache/models/ABP_6_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0000.model
Generate Stats Based on Test Data
Loss: 17.939011061633074 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 0.0 Specificity: 1.0 Threshold: 0.0
best_model_auroc_test_auroc: 0.5
best_model_auroc_test_auprc: 0.5899186519465427
run_experiment(
    experimentNamePrefix=None,
    useAbp=True,
    useEeg=False,
    useEcg=False,
    nResiduals=1,
    skip_connection=False,
    batch_size=128,
    learning_rate=1e-4,
    weight_decay=0.0,
    balance_labels=False,
    # pos_weight=2.0,
    pos_weight=None,
    max_epochs=100,
    patience=15,
    device=device
)
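The commented-out `pos_weight` argument would up-weight positive (hypotensive) samples in the loss, one standard answer to the class imbalance behind the always-negative collapse seen above. As a hedged sketch of what that weighting does — assuming the usual `pos_weight` convention of `torch.nn.BCEWithLogitsLoss`, applied here to probabilities for simplicity rather than the notebook's actual `run_experiment` internals:

```python
import math

def weighted_bce(p, y, pos_weight=1.0):
    """Binary cross-entropy with an extra multiplier on the positive-class term,
    mirroring the pos_weight convention of torch.nn.BCEWithLogitsLoss.
    p: predicted probability, y: label in {0, 1}."""
    eps = 1e-12  # guard against log(0)
    return -(pos_weight * y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# With pos_weight > 1 a missed positive costs more than a missed negative,
# nudging a model trained on imbalanced data away from "always negative".
miss_pos_plain = weighted_bce(0.1, 1, pos_weight=1.0)
miss_pos_weighted = weighted_bce(0.1, 1, pos_weight=2.0)
print(miss_pos_weighted > miss_pos_plain)  # True
```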
Experiment Setup
name: ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES
prediction_window: 003
max_cases: _ALL
use_abp: True
use_eeg: False
use_ecg: False
n_residuals: 1
skip_connection: False
batch_size: 128
learning_rate: 0.0001
weight_decay: 0.0
balance_labels: False
max_epochs: 100
patience: 15
device: mps
Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
  )
  (abpFc): Linear(in_features=30000, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=32, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
)
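The `in_features=30000` of `abpFc` is the flattened size of the residual stack's output. Assuming — this is inferred from the printed shapes, not stated in the cell — a 30,000-sample ABP input, 2 output channels, and one `MaxPool1d(stride=2)` per downsampling block, the bookkeeping is:

```python
def flattened_features(input_len, out_channels, n_downsamples):
    """Sequence length after n halvings, times channel count:
    the in_features of the Linear layer that flattens the conv output."""
    length = input_len
    for _ in range(n_downsamples):
        length //= 2  # each MaxPool1d(kernel_size=2, stride=2) halves the sequence
    return out_channels * length

# One residual block with one downsample: 2 channels * (30000 // 2) = 30000,
# matching abpFc's in_features in the printout above.
print(flattened_features(30000, 2, 1))  # 30000
```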
Training Loop
[2024-05-01 17:24:51.462803] Completed epoch 0 with training loss 9.01953125, validation loss 9.53373909
Validation loss improved to 9.53373909. Model saved.
[2024-05-01 17:27:29.594345] Completed epoch 1 with training loss 9.22287655, validation loss 9.43365765
Validation loss improved to 9.43365765. Model saved.
[2024-05-01 17:30:06.890347] Completed epoch 2 with training loss 9.22364902, validation loss 9.27761364
Validation loss improved to 9.27761364. Model saved.
[2024-05-01 17:32:45.891743] Completed epoch 3 with training loss 9.22875118, validation loss 9.45322800
No improvement in validation loss. 1 epochs without improvement.
[2024-05-01 17:35:23.080070] Completed epoch 4 with training loss 9.22301579, validation loss 9.32640553
No improvement in validation loss. 2 epochs without improvement.
[2024-05-01 17:38:00.767428] Completed epoch 5 with training loss 9.22170544, validation loss 9.42722225
No improvement in validation loss. 3 epochs without improvement.
[2024-05-01 17:40:41.766133] Completed epoch 6 with training loss 9.21363640, validation loss 9.32889366
No improvement in validation loss. 4 epochs without improvement.
[2024-05-01 17:43:19.231522] Completed epoch 7 with training loss 9.22630501, validation loss 9.46603870
No improvement in validation loss. 5 epochs without improvement.
[2024-05-01 17:45:59.742101] Completed epoch 8 with training loss 9.21892071, validation loss 9.46670246
No improvement in validation loss. 6 epochs without improvement.
[2024-05-01 17:48:40.997168] Completed epoch 9 with training loss 9.21124172, validation loss 9.29793167
No improvement in validation loss. 7 epochs without improvement.
[2024-05-01 17:51:18.776743] Completed epoch 10 with training loss 9.21278095, validation loss 9.20365906
Validation loss improved to 9.20365906. Model saved.
[2024-05-01 17:53:54.114880] Completed epoch 11 with training loss 9.21671486, validation loss 9.21492386
No improvement in validation loss. 1 epochs without improvement.
[2024-05-01 17:56:32.026611] Completed epoch 12 with training loss 9.20886707, validation loss 9.51096344
No improvement in validation loss. 2 epochs without improvement.
[2024-05-01 17:59:10.604632] Completed epoch 13 with training loss 9.21073723, validation loss 9.24962521
No improvement in validation loss. 3 epochs without improvement.
[2024-05-01 18:01:50.583528] Completed epoch 14 with training loss 9.21117401, validation loss 9.49262619
No improvement in validation loss. 4 epochs without improvement.
[2024-05-01 18:04:29.104623] Completed epoch 15 with training loss 9.20001793, validation loss 9.40825462
No improvement in validation loss. 5 epochs without improvement.
[2024-05-01 18:07:11.214745] Completed epoch 16 with training loss 9.19772053, validation loss 9.41944504
No improvement in validation loss. 6 epochs without improvement.
[2024-05-01 18:09:49.511765] Completed epoch 17 with training loss 9.19873714, validation loss 9.36987972
No improvement in validation loss. 7 epochs without improvement.
[2024-05-01 18:12:27.764337] Completed epoch 18 with training loss 9.19390202, validation loss 9.45592594
No improvement in validation loss. 8 epochs without improvement.
[2024-05-01 18:15:07.397085] Completed epoch 19 with training loss 9.19039536, validation loss 9.43966484
No improvement in validation loss. 9 epochs without improvement.
[2024-05-01 18:17:45.652285] Completed epoch 20 with training loss 9.19632721, validation loss 9.36256695
No improvement in validation loss. 10 epochs without improvement.
[2024-05-01 18:20:23.440440] Completed epoch 21 with training loss 9.18558216, validation loss 9.23729324
No improvement in validation loss. 11 epochs without improvement.
[2024-05-01 18:23:01.054298] Completed epoch 22 with training loss 9.17259598, validation loss 9.34210014
No improvement in validation loss. 12 epochs without improvement.
[2024-05-01 18:25:38.698334] Completed epoch 23 with training loss 9.17584419, validation loss 9.33071232
No improvement in validation loss. 13 epochs without improvement.
[2024-05-01 18:28:16.159020] Completed epoch 24 with training loss 9.16915035, validation loss 9.31958389
No improvement in validation loss. 14 epochs without improvement.
[2024-05-01 18:30:53.471937] Completed epoch 25 with training loss 9.16422558, validation loss 9.42515945
No improvement in validation loss. 15 epochs without improvement.
Early stopping due to no improvement in validation loss.

Plot Validation and Loss Values from Training
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0000.model
AUROC/AUPRC on Validation Data
Loss: 9.469923416773478 AUROC: 0.7606275596115302 AUPRC: 0.47012504560176693 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.130151958377272 AUROC: 0.7720112906788629 AUPRC: 0.46215757966516374 Sensitivity: 0.9995961227786753 Specificity: 0.002479631597591215 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0001.model
AUROC/AUPRC on Validation Data
Loss: 9.260262926419577 AUROC: 0.7606124547840272 AUPRC: 0.47011454161603605 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.130232989788055 AUROC: 0.7720017230950242 AUPRC: 0.46215835332023203 Sensitivity: 0.9995961227786753 Specificity: 0.002479631597591215 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0002.model
AUROC/AUPRC on Validation Data
Loss: 9.43397151099311 AUROC: 0.7606078576626133 AUPRC: 0.4701078381913476 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.130293371500793 AUROC: 0.7719976993448119 AUPRC: 0.4621565851499252 Sensitivity: 0.9995961227786753 Specificity: 0.002479631597591215 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0003.model
AUROC/AUPRC on Validation Data
Loss: 9.410708586374918 AUROC: 0.7606165593567182 AUPRC: 0.47008538587896365 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.12993018494712 AUROC: 0.7720006679783019 AUPRC: 0.4621753311203901 Sensitivity: 0.9995961227786753 Specificity: 0.002479631597591215 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0004.model
AUROC/AUPRC on Validation Data
Loss: 9.338108857472738 AUROC: 0.7605622148142894 AUPRC: 0.470060046027441 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.131708869227657 AUROC: 0.7719497183589451 AUPRC: 0.4621081660753035 Sensitivity: 0.9995961227786753 Specificity: 0.002479631597591215 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0005.model
AUROC/AUPRC on Validation Data
Loss: 9.515599409739176 AUROC: 0.7605586027903213 AUPRC: 0.47004779087455856 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.13175897024296 AUROC: 0.7719414383751746 AUPRC: 0.46210807005825394 Sensitivity: 0.9995961227786753 Specificity: 0.002479631597591215 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0006.model
AUROC/AUPRC on Validation Data
Loss: 9.32555713918474 AUROC: 0.7605549907663532 AUPRC: 0.47004605365129626 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.13207655262064 AUROC: 0.7719298142078943 AUPRC: 0.4620989057247404 Sensitivity: 0.9995961227786753 Specificity: 0.0025681898689337585 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0007.model
AUROC/AUPRC on Validation Data
Loss: 9.400466601053873 AUROC: 0.760535945549067 AUPRC: 0.47003806374520896 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.132660728913766 AUROC: 0.771905779006625 AUPRC: 0.4620744978736594 Sensitivity: 0.9995961227786753 Specificity: 0.0025681898689337585 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0008.model
AUROC/AUPRC on Validation Data
Loss: 9.43881807062361 AUROC: 0.7605495727304012 AUPRC: 0.47004820142789533 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.131339519112199 AUROC: 0.7719327291913815 AUPRC: 0.46213186923622207 Sensitivity: 0.9995961227786753 Specificity: 0.0025681898689337585 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0009.model
AUROC/AUPRC on Validation Data
Loss: 9.298220952351889 AUROC: 0.7605566325954296 AUPRC: 0.47005425483335583 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.130751208022788 AUROC: 0.7719434949586165 AUPRC: 0.462155352164885 Sensitivity: 0.9995961227786753 Specificity: 0.0025681898689337585 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0010.model
AUROC/AUPRC on Validation Data
Loss: 9.334589454862806 AUROC: 0.7605081986376758 AUPRC: 0.47000323185531984 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.132153855429756 AUROC: 0.7718915796392087 AUPRC: 0.46210417849163943 Sensitivity: 0.9995961227786753 Specificity: 0.0025681898689337585 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0011.model
AUROC/AUPRC on Validation Data
Loss: 9.381484905878702 AUROC: 0.7604942430905266 AUPRC: 0.4699960413621932 Sensitivity: 0.9975845410628019 Specificity: 0.0032626427406199023 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.13249675432841 AUROC: 0.7718713357048066 AUPRC: 0.46209303185715467 Sensitivity: 0.9995961227786753 Specificity: 0.0025681898689337585 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0012.model
AUROC/AUPRC on Validation Data
Loss: 9.372206979327732 AUROC: 0.7604600930457375 AUPRC: 0.46995686457925845 Sensitivity: 0.9975845410628019 Specificity: 0.0035345296356715608 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.133596857388815 AUROC: 0.7718264127691012 AUPRC: 0.46204403285950546 Sensitivity: 0.9995961227786753 Specificity: 0.0025681898689337585 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0013.model
AUROC/AUPRC on Validation Data
Loss: 9.385261548890007 AUROC: 0.7604164203923052 AUPRC: 0.4699047935448344 Sensitivity: 0.9975845410628019 Specificity: 0.0035345296356715608 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.134493894047207 AUROC: 0.7717869442503507 AUPRC: 0.46200681825297896 Sensitivity: 0.9995961227786753 Specificity: 0.002656748140276302 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0014.model
AUROC/AUPRC on Validation Data
Loss: 9.29403629567888 AUROC: 0.7604768397023167 AUPRC: 0.46999234701951337 Sensitivity: 0.9975845410628019 Specificity: 0.0035345296356715608 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.13170274981746 AUROC: 0.7718528979871664 AUPRC: 0.4621072755445262 Sensitivity: 0.9995961227786753 Specificity: 0.0025681898689337585 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0015.model
AUROC/AUPRC on Validation Data
Loss: 9.44086186091105 AUROC: 0.7604438389378811 AUPRC: 0.46997850402675423 Sensitivity: 0.9975845410628019 Specificity: 0.0035345296356715608 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.132000806155029 AUROC: 0.7718202251354411 AUPRC: 0.4621062942641606 Sensitivity: 0.9995961227786753 Specificity: 0.002656748140276302 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0016.model
AUROC/AUPRC on Validation Data
Loss: 9.289723581737942 AUROC: 0.7604453165840498 AUPRC: 0.4700108830319356 Sensitivity: 0.9975845410628019 Specificity: 0.0035345296356715608 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.131625577255532 AUROC: 0.7718111046349596 AUPRC: 0.4621192374252952 Sensitivity: 0.9995961227786753 Specificity: 0.002656748140276302 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0017.model
AUROC/AUPRC on Validation Data
Loss: 9.466414266162449 AUROC: 0.7604236444402414 AUPRC: 0.470002909839318 Sensitivity: 0.9975845410628019 Specificity: 0.0035345296356715608 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.131334878780224 AUROC: 0.7717960468674979 AUPRC: 0.4621249452346481 Sensitivity: 0.9995961227786753 Specificity: 0.0027453064116188452 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0018.model
AUROC/AUPRC on Validation Data
Loss: 9.331770804193285 AUROC: 0.760423316074426 AUPRC: 0.4700125159547049 Sensitivity: 0.9975845410628019 Specificity: 0.0035345296356715608 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.13078839028323 AUROC: 0.7717934895506964 AUPRC: 0.4621421041230852 Sensitivity: 0.9995961227786753 Specificity: 0.0028338646829613886 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0019.model
AUROC/AUPRC on Validation Data
Loss: 9.431715382470024 AUROC: 0.7603967184433884 AUPRC: 0.470001066938004 Sensitivity: 0.9975845410628019 Specificity: 0.0035345296356715608 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.130206066149253 AUROC: 0.7717678269660079 AUPRC: 0.46215218092919685 Sensitivity: 0.9995961227786753 Specificity: 0.002922422954303932 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0020.model
AUROC/AUPRC on Validation Data
Loss: 9.29028167989519 AUROC: 0.760423808623149 AUPRC: 0.4700613003084644 Sensitivity: 0.9975845410628019 Specificity: 0.0035345296356715608 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.128447978584855 AUROC: 0.7717848697835745 AUPRC: 0.4621926595367076 Sensitivity: 0.9995961227786753 Specificity: 0.002922422954303932 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0021.model
AUROC/AUPRC on Validation Data
Loss: 9.247494412793053 AUROC: 0.7603704491781661 AUPRC: 0.47000679195840356 Sensitivity: 0.9975845410628019 Specificity: 0.0035345296356715608 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.128803427572604 AUROC: 0.771728984363957 AUPRC: 0.4621600855383304 Sensitivity: 0.9995961227786753 Specificity: 0.0030109812256464753 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0022.model
AUROC/AUPRC on Validation Data
Loss: 9.297905100716484 AUROC: 0.7603701208123508 AUPRC: 0.47002866964557893 Sensitivity: 0.9975845410628019 Specificity: 0.0038064165307232192 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.127250373363495 AUROC: 0.7717271244971922 AUPRC: 0.4621944352619011 Sensitivity: 0.9995961227786753 Specificity: 0.0030995394969890186 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0023.model
AUROC/AUPRC on Validation Data
Loss: 9.385520882076687 AUROC: 0.7603712700927042 AUPRC: 0.4700855303223344 Sensitivity: 0.9975845410628019 Specificity: 0.0038064165307232192 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.125954495535957 AUROC: 0.7717167879299797 AUPRC: 0.4622045162293915 Sensitivity: 0.9995961227786753 Specificity: 0.0032766560396741058 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0024.model
AUROC/AUPRC on Validation Data
Loss: 9.365076396200392 AUROC: 0.760348120302727 AUPRC: 0.47009526665317347 Sensitivity: 0.9975845410628019 Specificity: 0.0038064165307232192 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.12567224325957 AUROC: 0.7716847946449573 AUPRC: 0.46223231636008916 Sensitivity: 0.9991922455573505 Specificity: 0.003365214311016649 Threshold: 0.0

Intermediate Model: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0025.model
AUROC/AUPRC on Validation Data
Loss: 9.311216367615593 AUROC: 0.7604009871989871 AUPRC: 0.4701793193546867 Sensitivity: 0.9975845410628019 Specificity: 0.0038064165307232192 Threshold: 0.0
AUROC/AUPRC on Test Data
Loss: 9.122152586778006 AUROC: 0.771721831030246 AUPRC: 0.46230216022178644 Sensitivity: 0.9991922455573505 Specificity: 0.003365214311016649 Threshold: 0.0

Plot AUROC/AUPRC for Each Intermediate Model
AUROC/AUPRC Plots - Best Model Based on Validation Loss
Epoch with best Validation Loss: 10, 9.204
Best Model Based on Validation Loss: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0010.model
Generate Stats Based on Test Data
Loss: 9.132153855429756 AUROC: 0.7718915796392087 AUPRC: 0.46210417849163943 Sensitivity: 0.9995961227786753 Specificity: 0.0025681898689337585 Threshold: 0.0
best_model_val_test_auroc: 0.7718915796392087
best_model_val_test_auprc: 0.46210417849163943

AUROC/AUPRC Plots - Best Model Based on Model AUROC
Epoch with best model Test AUROC: 0, 0.772
Best Model Based on Model AUROC: ./vitaldb_cache/models/ABP_1_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0000.model
Generate Stats Based on Test Data
Loss: 9.130151958377272 AUROC: 0.7720112906788629 AUPRC: 0.46215757966516374 Sensitivity: 0.9995961227786753 Specificity: 0.002479631597591215 Threshold: 0.0
best_model_auroc_test_auroc: 0.7720112906788629
best_model_auroc_test_auprc: 0.46215757966516374
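For reference, AUROC is the probability that a randomly chosen positive sample scores above a randomly chosen negative one (ties counted as half), which is why a constant predictor lands at exactly 0.5 while this single-residual-block run reaches roughly 0.77. A small, hedged re-implementation of that rank interpretation (the notebook itself presumably uses a library routine; this brute-force version is only for illustration):

```python
def auroc(y_true, scores):
    """Rank-based AUROC: P(score_pos > score_neg) + 0.5 * P(score_pos == score_neg),
    computed by comparing every positive/negative pair."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # 1.0 -- perfect ranking
print(auroc([1, 1, 0, 0], [0.5, 0.5, 0.5, 0.5]))  # 0.5 -- constant predictor
```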
run_experiment(
    experimentNamePrefix=None,
    useAbp=True,
    useEeg=False,
    useEcg=False,
    nResiduals=12,
    skip_connection=True,
    batch_size=128,
    learning_rate=1e-4,
    weight_decay=0.0,
    balance_labels=False,
    # pos_weight=2.0,
    pos_weight=None,
    max_epochs=100,
    patience=15,
    device=device
)
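This run enables `skip_connection=True`, so each residual block adds a projected copy of its input to the transformed output instead of passing the transformation alone. Schematically — a deliberately simplified, element-wise sketch, not the notebook's `ResidualBlock` (whose shortcut is the convolutional `residualConv` shown in the architecture printouts):

```python
def residual_step(x, transform, project, skip_connection):
    """y = transform(x) + project(x) when the skip connection is enabled,
    otherwise y = transform(x)."""
    h = [transform(v) for v in x]
    if skip_connection:
        shortcut = [project(v) for v in x]  # residualConv plays this role in the model
        h = [a + b for a, b in zip(h, shortcut)]
    return h

# Even if the learned transform collapses to zero, the skip path lets the
# input information pass through unchanged -- the classic residual motivation.
x = [1.0, 2.0, 3.0]
print(residual_step(x, lambda v: 0.0 * v, lambda v: v, skip_connection=True))   # [1.0, 2.0, 3.0]
print(residual_step(x, lambda v: 0.0 * v, lambda v: v, skip_connection=False))  # [0.0, 0.0, 0.0]
```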
Experiment Setup
name: ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES
prediction_window: 003
max_cases: _ALL
use_abp: True
use_eeg: False
use_ecg: False
n_residuals: 12
skip_connection: True
batch_size: 128
learning_rate: 0.0001
weight_decay: 0.0
balance_labels: False
max_epochs: 100
patience: 15
device: mps
Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(5): ResidualBlock(
(bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
(residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
)
(6): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(7): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
(8): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
)
(9): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
(10): ResidualBlock(
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(11): ResidualBlock(
(bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(dropout): Dropout(p=0.5, inplace=False)
(conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
(residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
)
)
(abpFc): Linear(in_features=2814, out_features=32, bias=True)
(fullLinear1): Linear(in_features=32, out_features=16, bias=True)
(fullLinear2): Linear(in_features=16, out_features=1, bias=True)
(sigmoid): Sigmoid()
)
Training Loop
100%|██████████| 212/212 [02:27<00:00, 1.44it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 21:55:26.189060] Completed epoch 0 with training loss 0.46038824, validation loss 0.60499549 Validation loss improved to 0.60499549. Model saved.
100%|██████████| 212/212 [02:25<00:00, 1.46it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 21:58:12.981925] Completed epoch 1 with training loss 0.44364360, validation loss 0.67117858 No improvement in validation loss. 1 epochs without improvement.
100%|██████████| 212/212 [02:25<00:00, 1.46it/s] 100%|██████████| 36/36 [00:21<00:00, 1.71it/s]
[2024-05-01 22:00:59.533376] Completed epoch 2 with training loss 0.43795389, validation loss 0.72728401 No improvement in validation loss. 2 epochs without improvement.
100%|██████████| 212/212 [02:24<00:00, 1.47it/s] 100%|██████████| 36/36 [00:20<00:00, 1.72it/s]
[2024-05-01 22:03:44.969729] Completed epoch 3 with training loss 0.43307063, validation loss 0.75527412 No improvement in validation loss. 3 epochs without improvement.
100%|██████████| 212/212 [02:24<00:00, 1.47it/s] 100%|██████████| 36/36 [00:21<00:00, 1.71it/s]
[2024-05-01 22:06:30.680846] Completed epoch 4 with training loss 0.42817578, validation loss 0.77707171 No improvement in validation loss. 4 epochs without improvement.
100%|██████████| 212/212 [02:25<00:00, 1.46it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 22:09:17.243281] Completed epoch 5 with training loss 0.42635915, validation loss 0.79353154 No improvement in validation loss. 5 epochs without improvement.
100%|██████████| 212/212 [02:25<00:00, 1.45it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 22:12:04.250652] Completed epoch 6 with training loss 0.42626557, validation loss 0.79275078 No improvement in validation loss. 6 epochs without improvement.
100%|██████████| 212/212 [02:25<00:00, 1.46it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 22:14:50.786397] Completed epoch 7 with training loss 0.42534384, validation loss 0.77055573 No improvement in validation loss. 7 epochs without improvement.
100%|██████████| 212/212 [02:25<00:00, 1.46it/s] 100%|██████████| 36/36 [00:21<00:00, 1.71it/s]
[2024-05-01 22:17:37.244598] Completed epoch 8 with training loss 0.42511353, validation loss 0.72974205 No improvement in validation loss. 8 epochs without improvement.
100%|██████████| 212/212 [02:26<00:00, 1.45it/s] 100%|██████████| 36/36 [00:20<00:00, 1.72it/s]
[2024-05-01 22:20:24.406385] Completed epoch 9 with training loss 0.42452765, validation loss 0.72905684 No improvement in validation loss. 9 epochs without improvement.
100%|██████████| 212/212 [02:24<00:00, 1.47it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 22:23:09.911493] Completed epoch 10 with training loss 0.42496556, validation loss 0.70417464 No improvement in validation loss. 10 epochs without improvement.
100%|██████████| 212/212 [02:26<00:00, 1.45it/s] 100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
[2024-05-01 22:25:57.192137] Completed epoch 11 with training loss 0.42492509, validation loss 0.71812779 No improvement in validation loss. 11 epochs without improvement.
100%|██████████| 212/212 [02:29<00:00, 1.42it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 22:28:47.736082] Completed epoch 12 with training loss 0.42498419, validation loss 0.71816391 No improvement in validation loss. 12 epochs without improvement.
100%|██████████| 212/212 [02:25<00:00, 1.46it/s] 100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
[2024-05-01 22:31:34.544030] Completed epoch 13 with training loss 0.42436752, validation loss 0.69524848 No improvement in validation loss. 13 epochs without improvement.
100%|██████████| 212/212 [02:24<00:00, 1.46it/s] 100%|██████████| 36/36 [00:21<00:00, 1.71it/s]
[2024-05-01 22:34:20.621638] Completed epoch 14 with training loss 0.42438808, validation loss 0.69036931 No improvement in validation loss. 14 epochs without improvement.
100%|██████████| 212/212 [02:24<00:00, 1.47it/s] 100%|██████████| 36/36 [00:21<00:00, 1.71it/s]
[2024-05-01 22:37:05.849470] Completed epoch 15 with training loss 0.42330128, validation loss 0.67199934 No improvement in validation loss. 15 epochs without improvement. Early stopping due to no improvement in validation loss.
Plot Validation and Loss Values from Training
Generate AUROC/AUPRC for Each Intermediate Model
Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0000.model
AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:19<00:00, 1.84it/s]
Loss: 0.6049955156114366 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.6043643002156858 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0001.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
Loss: 0.670808924569024 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.68it/s]
Loss: 0.6706025964683957 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0002.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.7270203514231576 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.68it/s]
Loss: 0.7275784197780821 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0003.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.7548059026400248 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.7561088441698639 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0004.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.7781105372640822 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.7788063486417135 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0005.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
Loss: 0.7930475556188159 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.7943420338409918 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0006.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.7934763630231222 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.7942864491983697 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0007.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.7697931196954515 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.7711941958577545 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0008.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.7293662362628512 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.7300568416162774 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0009.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.7294305215279261 AUROC: 0.49942470309162984 AUPRC: 0.17077217989045695 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.7298368795050515 AUROC: 0.4980429909633366 AUPRC: 0.12546898674560955 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0010.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
Loss: 0.7040574484401279 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.67it/s]
Loss: 0.704331436642894 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0011.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.68it/s]
Loss: 0.717866505185763 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.68it/s]
Loss: 0.7184771028933702 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0012.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.69it/s]
Loss: 0.7182294312450621 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.68it/s]
Loss: 0.718580170362084 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0013.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.70it/s]
Loss: 0.6952598392963409 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:06<00:00, 1.62it/s]
Loss: 0.6952902062071694 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0014.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:21<00:00, 1.67it/s]
Loss: 0.6903767569197549 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:07<00:00, 1.61it/s]
Loss: 0.6903442712845625 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Intermediate Model: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0015.model AUROC/AUPRC on Validation Data
100%|██████████| 36/36 [00:22<00:00, 1.60it/s]
Loss: 0.6720575557814704 AUROC: 0.5 AUPRC: 0.5918774966711052 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 AUROC/AUPRC on Test Data
100%|██████████| 108/108 [01:08<00:00, 1.59it/s]
Loss: 0.6718043088912964 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0 Plot AUROC/AUPRC for Each Intermediate Model
AUROC/AUPRC Plots - Best Model Based on Validation Loss
Epoch with best Validation Loss: 0, 0.605
Best Model Based on Validation Loss: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0000.model
Generate Stats Based on Test Data
100%|██████████| 108/108 [01:10<00:00, 1.53it/s]
Loss: 0.6043643002156858 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0
best_model_val_test_auroc: 0.5
best_model_val_test_auprc: 0.5899186519465427
AUROC/AUPRC Plots - Best Model Based on Model AUROC
Epoch with best model Test AUROC: 0, 0.5
Best Model Based on Model AUROC: ./vitaldb_cache/models/ABP_SKIPCONNECTION_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_0.0001_LEARNING_RATE_003_MINS__ALL_MAX_CASES_0000.model
Generate Stats Based on Test Data
100%|██████████| 108/108 [01:04<00:00, 1.68it/s]
Loss: 0.6043643002156858 AUROC: 0.5 AUPRC: 0.5899186519465427 Sensitivity: 1.0 Specificity: 0.0 Threshold: 0.0
best_model_auroc_test_auroc: 0.5
best_model_auroc_test_auprc: 0.5899186519465427
Results (Planned results for Draft submission)¶
When we complete our experiments, we will build tables comparing a set of measures across all experiments performed. The full set of experiments and measures is listed below.
Results from Final Rubric¶
- Table of results (no need to include additional experiments, but main reproducibility result should be included)
- All claims should be supported by experiment results
- Discuss with respect to the hypothesis and results from the original paper
- Experiments beyond the original paper
- Each experiment should include results and a discussion
- Ablation study
Experiments¶
- ABP only
- ECG only
- EEG only
- ABP + ECG
- ABP + EEG
- ECG + EEG
- ABP + ECG + EEG
Note: each experiment will be repeated with the following time-to-IOH-event durations:
- 3 minutes
- 5 minutes
- 10 minutes
- 15 minutes
Note: the above list of experiments will be performed if there is sufficient time and GPU capacity to complete them before the submission deadline. Should we experience any constraints on this front, we will reduce our experimental coverage to the following 4 core experiments that are necessary to test the hypotheses included at the head of this report:
- ABP only @ 3 minutes
- ABP + ECG @ 3 minutes
- ABP + EEG @ 3 minutes
- ABP + ECG + EEG @ 3 minutes
For additional details please review the "Planned Actions" in the Discussion section of this report.
Measures¶
- AUROC
- AUPRC
- Sensitivity
- Specificity
- Threshold
- Loss Shrinkage
[ TODO for final report - collect data for all measures listed above. ]
[ TODO for final report - generate ROC and PRC plots for each experiment ]
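As a sketch of how the per-experiment measures above can be computed, the following uses toy labels and scores with a minimal pure-Python AUROC; this is an illustration only, not the project's actual evaluation code:

```python
def auroc(y_true, y_prob):
    """AUROC via the Mann-Whitney U statistic: the probability that a random
    positive scores higher than a random negative (ties count as 0.5)."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(y_true, y_prob, threshold):
    """Sensitivity and specificity at a fixed decision threshold."""
    tp = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p >= threshold)
    fn = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p < threshold)
    tn = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p < threshold)
    fp = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels and predicted probabilities from one experiment.
y_true = [0, 0, 1, 1, 0, 1]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]

print(auroc(y_true, y_prob))            # 8/9 for this toy data
sens, spec = sens_spec(y_true, y_prob, 0.5)
print(sens, spec)                       # 2/3 and 1.0 for this toy data
```

In practice we compute these with library routines; the sketch only makes the definitions behind the reported numbers explicit.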
We are collecting a broad set of measures for each experiment in order to perform a comprehensive comparison against all comparable experiments executed in the original paper. However, our key experimental results will focus on the subset of these measures that addresses the main experiments defined at the beginning of this notebook.
The key experimental result measures will be as follows:
- For 3 minutes ahead of the predicted IOH event:
- compare AUROC and AUPRC for ABP only vs ABP+ECG
- compare AUROC and AUPRC for ABP only vs ABP+EEG
- compare AUROC and AUPRC for ABP only vs ABP+ECG+EEG
Model comparison¶
The following table is Table 3 from the original paper, which presents the measured values for each signal combination across each of the four temporal prediction windows:
We have not yet completed the experiments necessary to measure our reproduced model's performance and determine whether our results accurately represent those of the original paper. These details are expected to be included in the final report.
As of the draft submission, the reported evaluation measures of our model are too good to be true (all measures are 1.0). We suspect that there is data leakage in the dataset splitting process and will address this in time for the final report.
Discussion¶
Discussion (10) from Final Rubric¶
- Implications of the experimental results, whether the original paper was reproducible, and if it wasn’t, what factors made it irreproducible
- “What was easy”
- “What was difficult”
- Recommendations to the original authors or others who work in this area for improving reproducibility
- (specific to our group) "I have communicated with Maciej during OH. The draft looks good and I would expect some explanations/analysis on the final report on why you get 1.0 as AUROC."
- Discuss our bug: we believed we were sampling dozens of different patient cases, but were actually training the model on the same segments extracted from the same patient case over and over. We were therefore massively overfitting to one patient's data, then unwittingly using that same patient's data for validation and testing, which yielded perfect classification during inference.
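The fix for this class of leakage is to partition at the case (patient) level before extracting segments, so no case contributes to more than one split. A minimal sketch (the function name and 70/15/15 proportions are our own illustration, not the project's final splitting code):

```python
import random

def split_by_case(case_ids, train=0.7, val=0.15, seed=42):
    """Assign whole cases (patients) to train/val/test so that no segment
    from the same case can appear in more than one split."""
    cases = sorted(set(case_ids))
    random.Random(seed).shuffle(cases)
    n_train = int(len(cases) * train)
    n_val = int(len(cases) * val)
    return (set(cases[:n_train]),
            set(cases[n_train:n_train + n_val]),
            set(cases[n_train + n_val:]))

# Hypothetical case ids, one per extracted segment (cases repeat across segments).
segment_cases = [1, 1, 1, 2, 2, 3, 3, 3, 4, 5, 6, 7, 8, 9, 10]
train_ids, val_ids, test_ids = split_by_case(segment_cases)

# The splits are disjoint by construction, preventing the leakage described above.
assert not (train_ids & val_ids or train_ids & test_ids or val_ids & test_ids)
```

Each segment is then routed to the split that owns its case id before any training begins.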
Feasibility of reproduction¶
Our assessment is that this paper will be reproducible. The outstanding risk is that each experiment can take up to 7 hours to run on hardware available to the team (i.e., roughly 7 hours to run ~70 epochs on a desktop with an AMD Ryzen 7 3800X 8-core CPU, an RTX 2070 SUPER GPU, and 32 GB RAM). There are a total of 28 experiments (7 combinations of signal inputs times 4 time horizons per combination). Should it prove impossible to complete all of the experiments represented in Table 3 of our selected paper, we will reduce the number of experiments to focus solely on the ones directly related to our hypotheses described at the beginning of this notebook (i.e., reduce the combinations of interest to 4: ABP alone, ABP+EEG, ABP+ECG, ABP+ECG+EEG). This would result in a new total of 16 experiments to run.
Planned ablations¶
Our proposal included a collection of potential ablations to be investigated:
- Remove ResNet skip connection
- Reduce # of residual blocks from 12 to 6
- Reduce # of residual blocks from 12 to 1
- Eliminate dropout from residual block
- Max pooling configuration
- smaller size/stride
- eliminate max pooling
Given the amount of time required to conduct each experiment, our team intends to choose only a small number of ablations from this set. Further, we only intend to perform ablation analysis against the best performing signal combination and time horizon from the reproduction experiments. In other words, we intend to perform ablation analysis against the following training combinations, and only against the models trained with data measured 3 minutes prior to an IOH event:
- ABP alone
- ABP + ECG
- ABP + EEG
- ABP + ECG + EEG
Time and GPU resources permitting, we will complete a broader range of experiments. For additional details, please see the section below titled "Plans for next phase".
Nature of reproduced results¶
In the final submission of this report, our team intends to address how our experimental results align with the results published in the paper. The amount of time available while preparing the Draft notebook was not sufficient to complete a large number of experiments.
What was easy? What was difficult?¶
The difficult aspect of the preparation of this draft involved the data preprocessing.
- First, the source data is unlabelled, so our team was responsible for implementing analysis methods to assign positive (IOH event occurred) and negative (no IOH event occurred) labels by running a lookahead analysis over our input training set.
- Second, the volume of raw data is in excess of 90 GB. A non-trivial amount of compute was required to reduce the input data to only the tracks of interest to our experiments (i.e., the ABP, ECG, and EEG tracks).
- Third, our team found it difficult to trace back to the definition of the jSQI signal quality index referenced in the paper. Multiple references across multiple papers needed to be traversed to understand which variant of the quality index was actually used.
- The only available source code related to the signal quality index is that referenced by our paper in [5]. The source code was not directly linked from the paper, but the GitHub repository of the corresponding author of reference [5] led us to MATLAB source code for the signal quality index described in that paper. That code is available here: https://github.com/cliffordlab/PhysioNet-Cardiovascular-Signal-Toolbox/tree/master/Tools/BP_Tools
- Our team had insufficient time either to port this signal quality index to Python for use in our investigation or to set up a MATLAB environment in which to assess our source data using the above MATLAB functions, but we expect to complete this as part of our final report.
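The lookahead labeling described in the first bullet can be sketched as follows. The function name, the per-second MAP series, and the 60-second sustain criterion are our illustrative assumptions; the 65 mmHg threshold comes from the IOH definition in the paper:

```python
def label_segment(map_series, start, horizon_s, sustain_s=60):
    """Label a segment positive if, `horizon_s` seconds after the prediction
    point, mean arterial pressure stays below 65 mmHg for `sustain_s`
    consecutive seconds. `map_series` is a per-second MAP list and `start`
    indexes the prediction point."""
    lo = start + horizon_s
    window = map_series[lo:lo + sustain_s]
    return len(window) == sustain_s and all(v < 65 for v in window)

# Toy example: MAP dips below 65 mmHg three minutes after the prediction point.
map_series = [75] * 180 + [60] * 60 + [75] * 60
print(label_segment(map_series, start=0, horizon_s=180))  # True
print(label_segment(map_series, start=0, horizon_s=300))  # False (window runs past the record)
```

Running this labeler over every candidate segment, once per time horizon (3, 5, 10, 15 minutes), yields the per-horizon training labels.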
Suggestions to paper author¶
The most notable suggestion would be to correct the hyperparameters published in Supplemental Table 1. Specifically, the output size for residual blocks 11 and 12 for the ECG and ABP inputs was given as 496x6. This is a typo, and should read 469x6. The typo became apparent when performing the downsampling operation within residual block 11 and recognizing that the tensor dimensions were misaligned.
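The 469 figure can be checked arithmetically. Assuming a 30,000-sample ABP input window (our assumption about the input length) and propagating the length through the six MaxPool1d downsampling stages shown in the model dump above (block 8's pool uses padding=1, the rest padding=0) reproduces 469, consistent with the abpFc layer's in_features of 2814 (469 × 6):

```python
def maxpool1d_out(length, kernel=2, stride=2, padding=0):
    # PyTorch's MaxPool1d output-size formula (dilation=1, ceil_mode=False).
    return (length + 2 * padding - kernel) // stride + 1

length = 30000  # assumed ABP window length in samples
for padding in (0, 0, 0, 0, 1, 0):  # blocks 0, 2, 4, 6, 8, 10 downsample
    length = maxpool1d_out(length, padding=padding)

print(length)        # 469, not 496
print(length * 6)    # 2814 == abpFc in_features
```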
Additionally, more explicit references to the signal quality index assessment tools should be added. Our team could not find a link to the MATLAB source code described in reference [5], and had to manually discover the GitHub profile of the lab of that reference's corresponding author in order to find MATLAB source corresponding to the metrics described therein.
Plans for next phase¶
Our team plans to accomplish the following goals in service of preparing the Final Report:
- Implement the jSQI filter to remove any training data with aberrant signal quality per the threshold defined in our original paper.
- Execute the following experiments:
- Measure predictive quality of the model trained solely with ABP data at 3 minutes prior to IOH events.
- Measure predictive quality of the model trained with ABP+ECG data at 3 minutes prior to IOH events.
- Measure predictive quality of the model trained with ABP+EEG data at 3 minutes prior to IOH events.
- Measure predictive quality of the model trained with ABP+ECG+EEG data at 3 minutes prior to IOH events.
- Gather our measures for these experiments, perform a comparison against the published results from our selected paper, and determine whether or not we are successfully reproducing the results outlined in the paper.
- Ablation analysis:
- Execute the following ablation experiment:
- Repeat the four experiments described above while reducing the number of residual blocks in the model from 12 to 6.
- Time- and/or GPU-resource permitting, we will complete the remaining 24 experiments as described in the paper:
- Measure predictive quality of the model trained solely with ABP data at 5, 10, and 15 minutes prior to IOH events.
- Measure predictive quality of the model trained with ABP+ECG data at 5, 10, and 15 minutes prior to IOH events.
- Measure predictive quality of the model trained with ABP+EEG data at 5, 10, and 15 minutes prior to IOH events.
- Measure predictive quality of the model trained with ABP+ECG+EEG data at 5, 10, and 15 minutes prior to IOH events.
- Measure predictive quality of the model trained solely with ECG data at 3, 5, 10, and 15 minutes prior to IOH events.
- Measure predictive quality of the model trained solely with EEG data at 3, 5, 10, and 15 minutes prior to IOH events.
- Measure predictive quality of the model trained with ECG+EEG data at 3, 5, 10, and 15 minutes prior to IOH events.
- Additional ablation experiments:
- For the four core experiments (ABP, ABP+ECG, ABP+EEG, and ABP+ECG+EEG, each trained on data from 3 minutes prior to IOH events), perform the following ablations:
- Repeat each experiment while eliminating dropout from every residual block.
- Repeat each experiment while removing the skip connection from every residual block.
- Repeat each experiment while reducing the number of residual blocks in the model from 12 to 1.
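Until the jSQI port is complete, a crude plausibility filter in its spirit could drop obviously aberrant ABP segments. This is our own placeholder with made-up thresholds, not the published jSQI criteria:

```python
def plausible_abp(segment, lo=20.0, hi=300.0, min_range=5.0):
    """Crude stand-in for a signal quality index: reject segments that are
    empty, contain NaN-like gaps, have non-physiologic pressures, or are
    flat (e.g., a disconnected transducer). Thresholds are placeholders."""
    if not segment or any(v != v for v in segment):      # empty or NaN present
        return False
    if min(segment) < lo or max(segment) > hi:           # non-physiologic values
        return False
    return (max(segment) - min(segment)) >= min_range    # flatline check

print(plausible_abp([80.0, 120.0, 85.0, 118.0]))  # True: a plausible pulsatile trace
print(plausible_abp([0.0, 0.0, 0.0, 0.0]))        # False: out of range and flat
```

Segments failing the check would be excluded before training, mirroring the paper's use of jSQI to discard low-quality waveforms.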
References¶
- Jo Y-Y, Jang J-H, Kwon J-M, Lee H-C, Jung C-W, Byun S, et al. "Predicting intraoperative hypotension using deep learning with waveforms of arterial blood pressure, electroencephalogram, and electrocardiogram: Retrospective study." PLoS ONE (2022) 17(8): e0272055. https://doi.org/10.1371/journal.pone.0272055
- Hatib F, Jian Z, Buddi S, Lee C, Settels J, Sibert K, Rinehart J, Cannesson M. "Machine-learning Algorithm to Predict Hypotension Based on High-fidelity Arterial Pressure Waveform Analysis." Anesthesiology (2018) 129:4. https://doi.org/10.1097/ALN.0000000000002300
- Bao X, Kumar SS, Shah NJ, et al. "Acumen™ hypotension prediction index guidance for prevention and treatment of hypotension in noncardiac surgery: a prospective, single-arm, multicenter trial." Perioperative Medicine (2024) 13:13. https://doi.org/10.1186/s13741-024-00369-9
- Lee H-C, Park Y, Yoon SB, et al. "VitalDB, a high-fidelity multi-parameter vital signs database in surgical patients." Sci Data (2022) 9:279. https://doi.org/10.1038/s41597-022-01411-5
- Li Q, Mark RG, Clifford GD. "Artificial arterial blood pressure artifact models and an evaluation of a robust blood pressure and heart rate estimator." BioMed Eng OnLine (2009) 8:13. pmid:19586547. https://doi.org/10.1186/1475-925X-8-13
- Park H-J. "VitalDB Python Example Notebooks." GitHub repository. https://github.com/vitaldb/examples/blob/master/hypotension_art.ipynb
Public GitHub Repo (5)¶
- Publish your code in a public repository on GitHub and attach the URL in the notebook.
- Make sure your code is documented properly.
- A README.md file describing the exact steps to run your code is required.
- Check “ML Code Completeness Checklist” (https://github.com/paperswithcode/releasing-research-code)
- Check “Best Practices for Reproducibility” (https://www.cs.mcgill.ca/~ksinha4/practices_for_reproducibility/)
Video Presentation (Requirements from Rubric)¶
Walkthrough of the notebook; no need to make slides. We expect a well-timed, well-presented presentation. You should clearly explain what the original paper is about (what the general problem is, what the specific approach taken was, and what results were claimed) and what you encountered when you attempted to reproduce those results. Use the full time allotted, neither too much nor too little.
- <= 4 mins
- Explain the general problem clearly
- Explain the specific approach taken in the paper clearly
- Explain reproduction attempts clearly
print('All done!')
All done!